zephyr


https://www.zephyrproject.org/sites/local-zephyr/files/zephyr_project_technical_overview.pdf

https://wiki.zephyrproject.org/view/Development_Model

https://wiki.zephyrproject.org/view/Collaboration_Guidelines

Boards

CoAP

Hi,

Extracted from:

https://gerrit.zephyrproject.org/r/#/c/2487/

--8<---------------cut here---------------start------------->8---

Quapi[1] - Basic CoAP for Zephyr
################################

CoAP[2] is a widely used protocol for IoT communications, used in the OCF and LWM2M specifications, for example.

Concepts behind Quapi:

- basic: it will be enough to be compliant with the specification and to
  implement concrete use cases from OCF and LWM2M, for example;
- network independence: to make the transition to other network stacks easier,
  and testing trivial;
- predictable memory usage: avoiding dynamic memory allocation and keeping
  the necessary state tracking mechanisms visible to the user.

The possibility of porting `sol-coap` (from Soletta[3]) to Zephyr was considered, but the differences between the concurrency models of Soletta and Zephyr, and its dependence on dynamic memory allocation, would make the port difficult.

`libcoap` [4] is a pretty complete implementation, but too complicated because it heavily depends on the network stack API and concepts, and even if it's possible to make it not allocate memory, much of that is not directly exposed to the user.

You can reach the current implementation like this:

git clone http://ab01.bz.intel.com/~vcostago/quapi.git

There are a couple of basic unit tests (nothing fancy), a very simple example, and a few FIXMEs :-)

If you don't feel like cloning the repository, here is the API::

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
/* Useless forward declaration */
struct quapi_resource;
struct quapi_packet;
struct quapi_pending;
enum quapi_option_num {
    QUAPI_OPTION_IF_MATCH = 1,
    QUAPI_OPTION_URI_HOST = 3,
    QUAPI_OPTION_ETAG = 4,
    QUAPI_OPTION_IF_NONE_MATCH = 5,
    QUAPI_OPTION_OBSERVE = 6,
    QUAPI_OPTION_URI_PORT = 7,
    QUAPI_OPTION_LOCATION_PATH = 8,
    QUAPI_OPTION_URI_PATH = 11,
    QUAPI_OPTION_CONTENT_FORMAT = 12,
    QUAPI_OPTION_MAX_AGE = 14,
    QUAPI_OPTION_URI_QUERY = 15,
    QUAPI_OPTION_ACCEPT = 17,
    QUAPI_OPTION_LOCATION_QUERY = 20,
    QUAPI_OPTION_PROXY_URI = 35,
    QUAPI_OPTION_PROXY_SCHEME = 39
};
enum quapi_method {
    QUAPI_METHOD_GET = 1,
    QUAPI_METHOD_POST = 2,
    QUAPI_METHOD_PUT = 3,
    QUAPI_METHOD_DELETE = 4,
};
#define QUAPI_REQUEST_MASK 0x07
enum quapi_msgtype {
    QUAPI_TYPE_CON = 0,
    QUAPI_TYPE_NON_CON = 1,
    QUAPI_TYPE_ACK = 2,
    QUAPI_TYPE_RESET = 3
};
#define quapi_make_response_code(clas, det) (((clas) << 5) | (det))
enum quapi_response_code {
    QUAPI_RESPONSE_CODE_OK = quapi_make_response_code(2, 0),
    QUAPI_RESPONSE_CODE_CREATED = quapi_make_response_code(2, 1),
    QUAPI_RESPONSE_CODE_DELETED = quapi_make_response_code(2, 2),
    QUAPI_RESPONSE_CODE_VALID = quapi_make_response_code(2, 3),
    QUAPI_RESPONSE_CODE_CHANGED = quapi_make_response_code(2, 4),
    QUAPI_RESPONSE_CODE_CONTENT = quapi_make_response_code(2, 5),
    QUAPI_RESPONSE_CODE_BAD_REQUEST = quapi_make_response_code(4, 0),
    QUAPI_RESPONSE_CODE_UNAUTHORIZED = quapi_make_response_code(4, 1),
    QUAPI_RESPONSE_CODE_BAD_OPTION = quapi_make_response_code(4, 2),
    QUAPI_RESPONSE_CODE_FORBIDDEN = quapi_make_response_code(4, 3),
    QUAPI_RESPONSE_CODE_NOT_FOUND = quapi_make_response_code(4, 4),
    QUAPI_RESPONSE_CODE_NOT_ALLOWED = quapi_make_response_code(4, 5),
    QUAPI_RESPONSE_CODE_NOT_ACCEPTABLE = quapi_make_response_code(4, 6),
    QUAPI_RESPONSE_CODE_PRECONDITION_FAILED = quapi_make_response_code(4, 12),
    QUAPI_RESPONSE_CODE_REQUEST_TOO_LARGE = quapi_make_response_code(4, 13),
    QUAPI_RESPONSE_CODE_INTERNAL_ERROR = quapi_make_response_code(5, 0),
    QUAPI_RESPONSE_CODE_NOT_IMPLEMENTED = quapi_make_response_code(5, 1),
    QUAPI_RESPONSE_CODE_BAD_GATEWAY = quapi_make_response_code(5, 2),
    QUAPI_RESPONSE_CODE_SERVICE_UNAVAILABLE = quapi_make_response_code(5, 3),
    QUAPI_RESPONSE_CODE_GATEWAY_TIMEOUT = quapi_make_response_code(5, 4),
    QUAPI_RESPONSE_CODE_PROXYING_NOT_SUPPORTED = quapi_make_response_code(5, 5)
};
#define QUAPI_CODE_EMPTY (0)
typedef int (*quapi_method_t)(const struct quapi_resource *resource,
		struct quapi_packet *request,
		const void *addr);
struct quapi_resource {
	quapi_method_t get, post, put, del;
	const char **path;
	void *user_data;
};
struct quapi_packet {
	uint8_t *buf;
	uint8_t *start;
	uint16_t len;
	uint16_t used;
};
typedef int (*quapi_reply_t)(const struct quapi_packet *response,
		struct quapi_pending *pending);
struct quapi_pending {
	struct quapi_packet request;
	uint16_t timeout;
	quapi_reply_t reply;
	void *user_data;
};
struct quapi_option {
	void *value;
	uint16_t len;
};
/* buf must be valid while pkt is used */
int quapi_packet_parse(struct quapi_packet *pkt, uint8_t *buf, size_t len);
/* buf must be valid while pkt is used */
int quapi_packet_init(struct quapi_packet *pkt,
		uint8_t *buf, size_t len);
int quapi_pending_init(struct quapi_pending *pending,
		const struct quapi_packet *request);
struct quapi_pending *quapi_pending_next_unused(
		struct quapi_pending *pendings);
/*
 * After a response is received, clear all pending retransmissions related to
 * that response.
 */
struct quapi_pending *quapi_pending_received(
		const struct quapi_packet *response,
		struct quapi_pending *pendings);
/*
 * Returns the next pending about to expire; pending->timeout indicates how
 * many ms until the next expiration.
 */
struct quapi_pending *quapi_pending_next_to_expire(
		struct quapi_pending *pendings);
/*
 * After a request is sent, user may want to cycle the pending retransmission
 * so the timeout is updated. Returns false if this is the last
 * retransmission.
 */
bool quapi_pending_cycle(struct quapi_pending *pending);
/*
 * When a request is received, call the appropriate methods of the
 * matching resources.
 */
int quapi_handle_request(struct quapi_packet *pkt,
		const struct quapi_resource *resources,
		const void *from);
/*
 * Same logic as the first implementation of sol-coap, returns a pointer to
 * the start of the payload, and how much memory is available (to the
 * payload), it will also insert the COAP_MARKER (0xFF).
 */
uint8_t *quapi_packet_get_payload(struct quapi_packet *pkt, uint16_t *len);
int quapi_packet_set_used(struct quapi_packet *pkt, uint16_t len);
int quapi_add_option(struct quapi_packet *pkt, uint16_t code,
		const void *value, uint16_t len);
int quapi_find_options(const struct quapi_packet *pkt, uint16_t code,
		struct quapi_option *options, uint16_t veclen);
/* This stuff isn't interesting */
uint8_t quapi_header_get_version(const struct quapi_packet *pkt);
uint8_t quapi_header_get_type(const struct quapi_packet *pkt);
const uint8_t *quapi_header_get_token(const struct quapi_packet *pkt,
		uint8_t *len);
uint8_t quapi_header_get_code(const struct quapi_packet *pkt);
uint16_t quapi_header_get_id(const struct quapi_packet *pkt);
void quapi_header_set_version(struct quapi_packet *pkt, uint8_t ver);
void quapi_header_set_type(struct quapi_packet *pkt, uint8_t type);
int quapi_header_set_token(struct quapi_packet *pkt, const uint8_t *token,
		uint8_t tokenlen);
void quapi_header_set_code(struct quapi_packet *pkt, uint8_t code);
void quapi_header_set_id(struct quapi_packet *pkt, uint16_t id);
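
As an illustration, a minimal sketch of how a client might build a confirmable GET request with these primitives; the buffer size, the URI path and the exact call order are assumptions based only on the declarations above::

#include <string.h>

/* Hypothetical usage sketch (assumes the API header above is included):
 * build a CON GET for /sensors/temp. */
static uint8_t request_buf[128];

int build_get_request(struct quapi_packet *pkt, uint16_t id)
{
	int r = quapi_packet_init(pkt, request_buf, sizeof(request_buf));

	if (r < 0)
		return r;

	quapi_header_set_version(pkt, 1);
	quapi_header_set_type(pkt, QUAPI_TYPE_CON);
	quapi_header_set_code(pkt, QUAPI_METHOD_GET);
	quapi_header_set_id(pkt, id);

	/* One URI_PATH option per path segment. */
	r = quapi_add_option(pkt, QUAPI_OPTION_URI_PATH, "sensors", strlen("sensors"));
	if (r < 0)
		return r;

	return quapi_add_option(pkt, QUAPI_OPTION_URI_PATH, "temp", strlen("temp"));
}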

Another thing that I would appreciate is not focusing on the names of the functions (at first), but instead on the primitives provided. The lack of documentation is a feature for now :-)
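
Along the same lines, a hedged sketch of the server side: a resource with a GET handler, dispatched via quapi_handle_request(). The NULL-terminated path segments, the sentinel ending the resource array, and the handler/helper names are assumptions, not part of the declared API::

/* Hypothetical server-side sketch using only the declared primitives. */
static int temp_get(const struct quapi_resource *resource,
		    struct quapi_packet *request, const void *addr)
{
	/* Build and send a 2.05 Content response here (omitted). */
	return 0;
}

static const char *temp_path[] = { "sensors", "temp", NULL };

static const struct quapi_resource resources[] = {
	{ .get = temp_get, .path = temp_path },
	{ .path = NULL },	/* assumed sentinel terminating the array */
};

/* Called with a received datagram; 'from' is the opaque network address. */
void on_datagram(uint8_t *buf, size_t len, const void *from)
{
	struct quapi_packet pkt;

	if (quapi_packet_parse(&pkt, buf, len) < 0)
		return;

	quapi_handle_request(&pkt, resources, from);
}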

One thing that I couldn't decide is how to expose the “observe”[5] functionality without at least the abstraction of a network address. Suggestions are welcome.

[1] The name is a placeholder; the only reason it was chosen is that it sounds like 'coap' (to Portuguese speakers), but I would like that kind of discussion to happen after consensus is reached that this approach has value :-)

[2] https://tools.ietf.org/html/rfc7252

[3] https://github.com/solettaproject/soletta

[4] https://libcoap.net/

[5] https://tools.ietf.org/html/rfc7641

Unified kernel

      RFC - Unified Kernel
      ####################
      The Zephyr dual-kernel model is confusing for some, and has been found
      to be somewhat harder to use than more cookie-cutter RTOS kernels, even
      by experienced developers, especially for writing middleware meant to
      be used by both of its modes. Also, while the nanokernel performs well
      on both footprint and speed of execution (context-switch time, latency,
      etc.), the microkernel could be improved on both fronts.
      Problems
      ********
      Ease of use of the Zephyr kernel model
      ======================================
      The Zephyr kernel uses a model that is not intuitive to a majority of
      developers with its nanokernel/microkernel split. That model is usable
      by developers writing an application that is to run on a
      specific hardware platform, since they can choose which
      configuration is best for them, taking into account memory size,
      clock speed, etc.  However, developers trying to write middleware that
      can run in both configurations often target the nanokernel API since it
      is available in both. This is not necessarily the best approach,
      especially for microkernel systems. Thus, in theory, a nanokernel-only
      system and a microkernel system should be treated as two different
      kernels, or operating systems, but that demands a duplication of
      effort.
      Microkernel performance (speed and footprint)
      =============================================
      The nanokernel's speed and footprint are fine, and probably close to being
      optimal, when thinking about interrupt latency, object operations, etc.
      However, raw microkernel performance could be improved by removing the
      need for the so-called "double-context switch" that happens when a
      preemptible task interacts with the kernel, such as when it tries to
      acquire a mutex object.  This is because the way the tasks interact with
      the microkernel is by sending a message to the kernel, and waiting for a
      reply. The way the messaging is implemented is by having a task create a
      message on its stack, then post it on a nanokernel stack object, on
      which a fiber, the microkernel server fiber, is always waiting: since a
      fiber always preempts a task, the task is context-switched out, and the
      fiber context-switched in. The fiber then does the operation on the
      mutex object and sends a reply to the task, which differs depending on
      whether the mutex was free or not. The fiber goes back to sleeping on the
      stack object, and the task resumes.  So, even in the event that the mutex
      was free, a double context switch still happened. This might sound
      ridiculous, or at least like overkill, were it not for the fact that the
      microkernel was originally written for distributed processors operating as
      one: in
      that case, an inter-processor messaging system was needed to operate on
      objects residing on remote nodes, and the same model was applied to the
      local node as well, for a concise and uniform model. However, in the
      single-CPU systems Zephyr targets, this does not make sense anymore.
      Duplication of object types between the nanokernel and microkernel
      ==================================================================
      There are nanokernel counting semaphores, and microkernel counting
      semaphores, both with pretty much the same characteristics. There are
      nanokernel FIFOs and microkernel FIFOs, with completely different
      characteristics. Fibers can operate on both semaphore types, but they
      cannot take a microkernel semaphore, only give it. Fibers cannot operate
      at all on microkernel FIFOs.
      Also, there are nanokernel and microkernel timers, and there are
      nanokernel timeouts (although to be fair, timeouts are an internal
      concept).
      This is confusing to users.
      Fibers cannot perform most microkernel operations
      =================================================
      The limitation on fibers not being able to perform most operations on
      microkernel objects comes from the implementation of the microkernel
      messaging system. A sender creates a packet on its stack, makes it
      available to the microkernel server fiber, and waits for a reply. This
      works well when the sender is a task, since it yields immediately to
      the fiber. For a fiber, this does not work, since a fiber does not yield
      implicitly, and thus the packet on the stack is popped as soon as the
      function returns.
      For fibers or ISRs to operate on microkernel objects, they must obtain a
      packet from a pool, and queue it. However, a) the pool is finite and
      could run out, and b) the fiber/ISR doing this does not wait for a
      reply, so only certain operations are possible. Another way of doing
      this, and this is what is done for fiber/ISR sem_take and event_send, is to
      encode the information of the operation not in a packet, but in a 32-bit
      value that is pushed to the microkernel server, thus removing the packet
      pool exhaustion issue. This trick uses the low 2-bits of an aligned
      object's address to encode the operation, and is thus very limited.
      However, it does not solve the fact that a fiber/ISR cannot wait for a
      reply.
      Operations by tasks on nanokernel objects are inefficient
      ==========================================================
      Until very recently, tasks used to busy-wait on nanokernel objects,
      which for all intents and purposes, locks the microkernel scheduler and
      prevents tasks of lower priority from running during that time. This can
      cause major inefficiencies and makes it extremely easy to cause
      deadlocks when used in microkernel systems.
      A recent change introduced tasks pending on nanokernel objects in a
      microkernel.  However, nanokernel objects were never meant to operate
      like that, and the microkernel was not designed for that either. Making
      this change introduced some interrupt latency on operations that change
      a task's state, such as blocking or being put in the ready queue, and is
      more a stopgap measure than a proper design.
      The system initialization is running in the idle task
      =====================================================
      The idle task cannot sleep, since it is the context the nanokernel must
      schedule in when no other context is available.
      Having it be the context that runs initialization forces initialization
      code in device drivers and subsystems to be very careful about which
      APIs it calls, and demands special cases in some API
      implementations to handle the case of running in the idle task.
      This is also confusing to users.
  [AL] I agree that this is a problem, but do you have a proposal for
  addressing it? I didn't notice one in the set of proposed changes that
  appear below.

You're right, it's not spelled out explicitly. I mention below that the init thread can be configured to be either coop or preempt. It's missing the mention that the init thread is decoupled from the idle thread. We'll keep an idle thread for now to ease the transition, but in the future it could be possible to remove it entirely.

      Microkernel objects and tasks cannot be created at runtime
      ==========================================================
      This can cause problems when porting code that needs to create them when
      a certain event happens, e.g. a new connection from an external agent.
      Proposed changes
      ****************
      Drop the microkernel server
      ===========================
      The message-passing microkernel model is removed. The only remaining
      kernel/scheduler is the nanokernel.
      The message passing was used originally for :abbr:`VSP (Virtual Single
      Processor)`, the multi-processor model in Zephyr's ancestor. Since we
      are targeting single-CPU systems, it is now irrelevant. All the
      scheduling decisions can be moved to the nanokernel, and operations on the
      microkernel objects can be done in the context of the caller, instead of
      transitioning to an intermediate fiber.  Some of the operations now
      require locking interrupts instead of single-threading through a fiber;
      basically, a) operations on objects that can be operated on from ISRs,
      and b) operations on the main kernel queues, like the ready queue. For
      objects that can only be operated on from threads, a concept of 'locking
      the scheduler' is introduced, which is equivalent in concept to
      switching to the microkernel fiber, but much lighter.
      Removing the microkernel server fixes some other issues as well:
  • The microkernel server stack cannot overflow, since it does not exist
    anymore.
  • Cooperative threads are now allowed to run for any amount of time in
    any system configuration, although this should still be used very
    carefully to avoid preventing other threads from running.
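      To make the two protection mechanisms above concrete, here is a hedged
      sketch; :c:func:`irq_lock`/:c:func:`irq_unlock` exist in Zephyr today,
      while the scheduler-lock calls are hypothetical names for the concept
      this RFC introduces::

        /* Objects reachable from ISRs: protect by locking interrupts. */
        void op_on_isr_safe_object(void)
        {
                unsigned int key = irq_lock();
                /* ... manipulate the object and the ready queue ... */
                irq_unlock(key);
        }

        /* Thread-only objects: protect by 'locking the scheduler', which is
         * much lighter than switching to a microkernel server fiber.
         * These two calls are hypothetical names for the proposed concept. */
        void op_on_thread_only_object(void)
        {
                k_sched_lock();
                /* ... manipulate the object ... */
                k_sched_unlock();
        }
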
      Native multi-core support (or lack thereof)
      -------------------------------------------
      A small parenthesis here to note that this change removes the
      possibility of reintroducing the VSP concept as it was implemented in
      Zephyr's ancestor's codebase. This RFC does not take into account
      implementing a native multi-core component in the kernel either. It does
      not close the door to implementing some multi-core support as :abbr:`AMP
      (Asymmetric Multi Processor)`, since AMP is always possible to implement
      as an add-on, given the necessary hardware.  In fact, there is already
      some support for AMP in Zephyr, in the IPM libraries.
      Both VSP and traditional AMP do not allow migration of threads across
      cores, so there is nothing lost in losing VSP on that front.
      SMP is definitely not on the radar, due to it being complex, most
      probably too much for a small-footprint and simple kernel such as
      Zephyr.
      Make the nanokernel 'preemptible thread'-aware
      ==============================================
      The nanokernel only understood multiple fibers and *one* task. The
      microkernel was the one taking care of the preemptive multi-tasking
      scheduling, but was not fiber-aware.  Since the goal is to unite the two
      kernels, one of the schedulers goes away, but the one remaining must
      take on its duties. And since the nanokernel already takes care of the
      heavy lifting of switching contexts, and is already fiber-aware, the
      united kernel is built on it.
      It is somewhat trivial to turn the nanokernel's scheduler into a
      preemptive thread-aware one, without compromising on the fast path
      switching to and from interrupts (from a cooperative thread and back to
      it). Basically, it has to take the scheduling decisions the microkernel
      used to take, but only when its own decisions have been taken into
      account and dismissed.
      Unify fibers and tasks as one type of threads
      =============================================
      Fibers and tasks are now both the same type of executing context,
      called threads. They are differentiated only by their priority: "fibers"
      (cooperative threads) are of priorities lower than 0, and "tasks"
      (preemptive threads) are of priorities greater than or equal to zero.
      There is nothing preventing having a system consisting of only
      cooperative threads, once the system is up-and-running. The
      initialization thread is configurable to be either a preemptive or
      cooperative thread.
      Allow cooperative threads to operate on all types of objects
      ============================================================
      This one is straightforward. Since the microkernel message passing model
      is removed, there is no technical reason that should prevent a
      cooperative thread from doing any operation on any type of object.
      Clarify duplicated object types
      ===============================
      The microkernel and nanokernel semaphores are unified as one type of
      counting semaphore object. The object could be enhanced with a 'limit'
      to create the behaviour of binary semaphores.
      The nanokernel and microkernel FIFOs' difference is clarified by
      renaming the microkernel FIFOs to message queues (msg_q) [XXX - anyone
      have a better name?].
      The microkernel and nanokernel timers are unified as one type of timer
      object, by basically providing the APIs from both types of timers. The
      implementation is based on the nanokernel timer implementation, using
      what were the nanokernel timeouts (CONFIG_NANO_TIMEOUTS).
      Create a new, more streamlined API, without any loss of functionality
      ======================================================================
      The kernel takes over one more namespace: k_/K_.
      All new APIs are named:
        :code:`k_<something>` or :code:`K_<SOMETHING>`
      The reasoning here is threefold:
  1. There is no clash with the old APIs, to allow mapping the old APIs to
     new APIs (see below).
  2. There are only two namespaces reserved by the kernel ('_' and 'k_'),
     which makes things easier.
  3. A future version of the kernel will release all previously held
     namespaces, such as :code:`task_`, :code:`fiber_`, :code:`nano_`,
     :code:`isr_`, etc. and make them available to applications, except
     the '_' namespace, which will still be used for global private symbols.
      There is no need to provide APIs that are specific to the caller's
      context type.
      Basically, we don't see a :c:func:`nano_task_sem_take` and
      :c:func:`nano_fiber_sem_take` anymore, but simply :c:func:`k_sem_take`.
      The per-context type logic is embedded in the function when needed.
      This should help the footprint when multiple context types need to
      access an object type, since currently we have wrapper functions that
      reference all three types (fiber, task, ISR) anyway, and pull all three
      in the kernel image.
      Originally, :c:func:`nano_sem_take` was a wrapper containing an array
      referencing :c:func:`nano_task_sem_take`, :c:func:`nano_fiber_sem_take`
      and :c:func:`nano_isr_sem_take`.
      In the new kernel, the semaphore library now provides:
        :c:func:`k_sem_init`, :c:func:`k_sem_take`, :c:func:`k_sem_give`
      that can operate in all execution contexts, and that's it.
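      For instance, a minimal sketch of the proposed unified API; the exact
      signatures are an assumption based on this RFC, not a final definition::

        struct k_sem my_sem;

        void app_init(void)
        {
                k_sem_init(&my_sem, 0, 1);      /* initial count 0, limit 1 */
        }

        void producer(void)     /* may be an ISR, a coop or a preempt thread */
        {
                k_sem_give(&my_sem);            /* same call from any context */
        }

        void consumer(void)     /* any thread; blocking is allowed here */
        {
                k_sem_take(&my_sem, K_FOREVER); /* K_FOREVER replaces TICKS_UNLIMITED */
        }
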
      The microkernel object types are all adapted to handle being called from
      a cooperative thread.
      The MDEF file format along with the current macros for defining objects
      in code at build-time is still supported to allow build-time creation
      and initialization of objects. However, runtime creation of objects is
      supported as well, similar to what is done for current nanokernel
      objects: a piece of memory can be passed to :code:`_init` APIs to
      initialize it as an object.
      For example, k_mutex_init and k_mem_pool_init are now made available as
      public APIs.
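      A hedged sketch of what that runtime initialization might look like; the
      exact prototype is an assumption::

        struct k_mutex my_mutex;        /* any suitably sized piece of memory */

        void create_at_runtime(void)
        {
                /* Previously only possible via MDEF / build-time macros. */
                k_mutex_init(&my_mutex);
        }
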
      APIs needing special care
      -------------------------
      Some APIs need special treatment to make them function as they currently
      do:
      Offload to fiber
      ''''''''''''''''
      Of course, a fiber is needed for this API. It is built on top of the
      work_queue library. The system workqueue must be enabled.
      Event handlers
      ''''''''''''''
      Sending an event used to be able to ask the kernel server fiber to run a
      handler. Instead, the handler now runs in the context of the sender.
  [AL] I'm not sure this change is desirable (or even feasible). If
  nothing else, it's a paradigm shift from the way things currently
  work. Maybe event handlers should continue to run in a system fiber,
  at least to start with. We could simply use the one that processes
  offload to fiber requests.

True. We can build this on top of the system workqueue as well. I forgot to take ISRs sending an event into consideration.

      Thread groups (formerly task groups) APIs
      '''''''''''''''''''''''''''''''''''''''''
      These APIs are only able to operate on statically defined threads. The
      reason is that thread group operations are performed on an array of all
      threads looking for threads that are part of the group being operated
      on, not on a list of threads that are part of a group. And the reason
      for this is that threads can be part of multiple groups, so group
      ownership is implemented as a bitfield in the thread data structure. So
      on a group operation, all threads are looked at to see if they are part
      of the group being operated on.
  [AL] One potential issue with excluding dynamically defined threads
  from group operations is that it would prevent Zephyr from easily
  halting all non-SYS threads during debugging-type operations. This may
  or may not be a problem, depending on whether or not this capability
  is really needed in Zephyr. (Maybe this sort of thing was really only
  needed in the multi-node predecessor to Zephyr, and doesn't make sense
  now that we're only worried about debugging single node systems?)

This can be implemented another way if needed, with different queues.

      Alternatives
      ++++++++++++
      Instead of being in an array, threads could be all linked together and
      group operations could follow this list. However, this could have
      performance side-effects, such as cache misses, when jumping all over
      memory.
      Another solution, since all groups are defined at build-time, could be
      to create one list pointer per group in the threads' structure so that
      operations on a group would not have to search for which threads are
      part of the group.  This would be at the cost of having bigger thread
      structures when multiple groups exist. The cache misses would still
      happen, but this is no different than operations on any other type of
      lists.
      Microkernel FIFOs, pipes, memory pools and memory maps
      ''''''''''''''''''''''''''''''''''''''''''''''''''''''
      These objects need a buffer in addition to the object itself. For each
      of these object types, we provide a macro that computes the amount of
      space needed, including the object itself, to be used when creating an
      object at runtime.
      e.g.
      :code:`char my_pool[K_MEM_POOL_SIZE(min_block_size, max_block_size,
      num_max)];`
      Miscellaneous
      -------------
      The new APIs return generic negative errnos on failure and 0 on success,
      instead of the RC_ ones used by the microkernel.
      The timeout values passed to APIs are in seconds and nanoseconds,
      similar to :code:`struct timespec`, instead of ticks, to allow for a
      future fully tickless kernel. The granularity is in ticks for now
      however since the system clock is still tick-based.
      [Q: do we want a struct like timespec or use an int64_t in nanoseconds ?]
      Special timeout values are now :code:`K_NO_WAIT` and :code:`K_FOREVER`
      instead of :code:`TICKS_NONE` and :code:`TICKS_UNLIMITED`.
      .. note::
        The design of the tickless kernel is outside the scope of this document.
      Keep the old API (as deprecated) to ease transition
      ===================================================
      For at least one kernel version, all old APIs shall map to a new API.
      The old APIs are all mapped to the new implementations.
      For example:
      :c:func:`nano_task_sem_give`, :c:func:`nano_fiber_sem_give`,
      :c:func:`nano_sem_give`, :c:func:`task_sem_give`,
      :c:func:`fiber_sem_give` and
      :c:func:`isr_sem_give`
      are all aliases of
      :c:func:`k_sem_give`.
      We should probably deprecate the old APIs one or two kernel versions
      after the introduction of the new APIs to avoid confusion.
      Alternative
      '''''''''''
      Instead of aliases, do the mapping via macros in one header file that
      application developers could maintain themselves if they want to keep
      using the old API names once we stop supporting them?
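      A hedged sketch of what such a compatibility header could look like,
      using the :c:func:`k_sem_give` aliases listed above (the header name and
      its existence are hypothetical)::

        /* compat_kernel.h - hypothetical application-maintained mapping */
        #define nano_task_sem_give(sem)         k_sem_give(sem)
        #define nano_fiber_sem_give(sem)        k_sem_give(sem)
        #define nano_sem_give(sem)              k_sem_give(sem)
        #define task_sem_give(sem)              k_sem_give(sem)
        #define fiber_sem_give(sem)             k_sem_give(sem)
        #define isr_sem_give(sem)               k_sem_give(sem)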

ARC

Note that for ARC, mainline gcc is not fully working, as not all required patches are in yet. The current recommendation for gcc is to use the one from the Synopsys GitHub. Binutils from mainline is OK to use.

You can check the latest gcc for ARC mainline developments, provided as is, here: https://github.com/foss-for-synopsys-dwc-arc-processors/gcc/tree/dev

The stable port and officially supported version currently is 4.8.5, which can be found here: https://github.com/foss-for-synopsys-dwc-arc-processors/gcc/tree/arc-4.8-dev

How to build and other info: https://github.com/foss-for-synopsys-dwc-arc-processors/toolchain

If you have issues, let me know, or even better: open an issue using the github issues mechanism.

Regards,

Ruud.

-----Original Message-----
From: Lukasz Janyst [mailto:xyz@jany.st]
Sent: Wednesday, May 25, 2016 1:27 PM
To: devel@lists.zephyrproject.org
Subject: [devel] building zephyr with mainline toolchain for Arduino 101

Hi there,

I have recently started playing with Zephyr and decided to see whether I can build it for Arduino 101 with the mainline toolchain. I compiled the following for both i586-none-elfiamcu and arc-none-elf targets:

* binutils master branch (2.26.51 has messed up command-line parsing for arc, fixed in master)
* gcc 6.1
* newlib 2.4.0
* gdb 7.11 (mainline for Intel, foss-for-synopsys-dwc-arc-processors for arc)

Things work well for Intel except for one minor glitch in newlib's 'strtold()'. However, for arc, I have encountered some issues with zephyr itself:

1) In various assembler files you use the 'j_s.nd [blink]' instruction. I could not find anywhere what it is supposed to do and mainline gas does not know what to do about it. When you look at the actual opcodes emitted by poky gas, it seems that what you meant is 'j_s [blink]':

https://jany.st/tools/zerobin/?0720d0b22d454d12#0isJEY6pqj4aMNiHL5tUOn627K/c746sR24Smx9qKL8=

Things work if I change that. I can send out a patch if there is an interest.

2) gcc 6.1 for arc seems to emit code containing traps calling 'abort()':

https://jany.st/tools/zerobin/?4b679a767b37a3a5#JBh80TwI4M3jahfRBLvWQvjVhRIDFnnB4AVEewXUTNY=

Zephyr does not provide the symbol, so the compilation fails. It seems that Linux provides 'abort()' for certain architectures:

http://lxr.free-electrons.com/source/arch/arm/kernel/traps.c#L756

I don't know enough about arc to provide a sensible implementation, so suggestions would be appreciated. A dummy symbol makes the compilation pass.
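
For reference, the "dummy symbol" mentioned above could be as simple as the following sketch; a real port would presumably want to hook this into the architecture's fatal-error handling instead:

#include <stdlib.h>

/* Minimal placeholder so the link succeeds; never returns. */
void abort(void)
{
	for (;;) {
		/* spin: nothing sensible to do without arch support */
	}
}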

