Design Overview - RPMsg
Overview
Modern SoCs typically have multiple processor cores, with the type and number of cores deployed in a variety of combinations, or Asymmetric MultiProcessing (AMP) configurations. Each of the processor cores may be running a different operating system instance, such as a High-Level Operating System (HLOS) like Linux, a proprietary Real-Time Operating System (RTOS), or simply a custom firmware image (on hardware accelerators).
OMAP4, for example, has a dual Cortex-A9, a dual Cortex-M3 and a C64x+ mini-DSP. Typically, the dual Cortex-A9 runs Linux in an SMP configuration, and each of the other three cores (two M3 cores and a DSP) runs its own instance of an RTOS, SYS/BIOS, in an AMP configuration.
AMP remote processors usually employ dedicated multimedia hardware accelerators, and are therefore often used to offload CPU-intensive multimedia tasks from the main application processor. These remote processors can also be used to control latency-sensitive sensors, drive arbitrary hardware blocks, or just perform background tasks while the main CPU is idling.
Users of these remote processors can either be userland apps (e.g. multimedia frameworks such as OMX or GStreamer talking with their respective remote components) or kernel drivers (controlling hardware accessible only by the remote processor, reserving kernel-controlled resources on behalf of the remote processor, etc.).
Anyone wishing to use these remote processors requires, at a minimum, two core functionalities - device management & a messaging framework. The device management portion is in general independent of the messaging framework and is only responsible for managing the state of the remote processor. The messaging framework, on the other hand, depends on the remote processor being up, but the messaging protocol itself is independent. These two core functionalities are provided in RPMsg by the remoteproc and rpmsg modules respectively.
The rest of this page explains the design details of these modules. Some of the details may be specific to the Android kernel version, and any design differences with the upstreaming effort will be discussed in the Open Source page.
The Kernel sources and SYS/BIOS sources can give a glimpse into how the design features below are partitioned and implemented in the different RPMsg files, for those who want to dig into the code directly.
remoteproc
remoteproc is a generic kernel component that manages remote processors and enables users to access them. The main functionalities implemented by remoteproc are Device Loading & Bootup, Power Management, Exception Management & Error Recovery.
remoteproc can be thought of as made up of two main sub-components: a generic module exposing a standard interface that enables any kernel client component to use the remote processors, and a varying module that abstracts out the platform-specific implementations. The platform-specific implementations can vary from one device to another, and are catered to that particular device's features and functionalities. This split keeps duplication minimal when a new architecture/platform needs to be supported.
The generic remoteproc interfaces are explained in detail in the remoteproc User API section below.
A remote processor is brought up by the first user module that attempts to get a handle to it. Bringing up a remote processor requires three main steps:
- Load an executable into the remote processor's memory (into portions of SDRAM dedicated to the remote processor).
- Program the MMU associated with the remote processor, mapping the executable's virtual addresses to the actual physical addresses or pages.
- Bring the remote processor out of reset.
All three steps are completed by remoteproc transparently to the user module. This greatly simplifies configuration from the perspective of a user module; splitting and exporting these functionalities separately would not add much value and would be burdensome. The first step is currently carried out by the generic remoteproc sub-component itself, and it mandates a minimal image format. The firmware image format and the exact loading procedure are explained in the Firmware Loading section. The second and third steps are implemented by the platform-specific implementation hooks, as these steps involve programming an MMU associated with the remote processor and writing into device-specific registers.
The generic remoteproc component also deals with the runtime device management of the remote processors. The main functionality is the integration with the core kernel PM framework for suspend & resume. remoteproc leverages the runtime_pm framework and exports interfaces that can be used to control the runtime PM state machine within the driver. remoteproc also provides other interfaces for setting constraints on behalf of users. These details are explained in the Power Management and Resource Manager sections below. The other important functionality provided by remoteproc is the debugfs infrastructure, useful for reading out a variety of information from the kernel console.
The remoteproc module depends on the platform-specific implementations plugged into the generic remoteproc module to accomplish the different functionalities. The generic remoteproc module requires a particular set of hook functions to be implemented by each pluggable implementation. These are given below:
struct rproc_ops {
        int (*start)(struct rproc *rproc, u64 bootaddr);
        int (*stop)(struct rproc *rproc);
        int (*suspend)(struct rproc *rproc, bool force);
        int (*resume)(struct rproc *rproc);
        int (*iommu_init)(struct rproc *, int (*)(struct rproc *, u64, u32));
        int (*iommu_exit)(struct rproc *);
        int (*set_lat)(struct rproc *rproc, long v);
        int (*set_bw)(struct rproc *rproc, long v);
        int (*scale)(struct rproc *rproc, long v);
        int (*watchdog_init)(struct rproc *, int (*)(struct rproc *));
        int (*watchdog_exit)(struct rproc *);
        void (*dump_registers)(struct rproc *);
};
The start and stop hooks are responsible for controlling the remote processor's resets. The suspend and resume hooks are Power Management hooks that let the platform-specific implementation check and perform any actions needed to put the remote processor devices into and out of a low-power state. The iommu_init and iommu_exit hooks are provided to enable the specific implementation to configure and program the underlying MMUs. The set_lat, set_bw and scale hooks are provided for setting specific constraints. The watchdog_init and watchdog_exit hooks are required for configuring any watchdog timers. The dump_registers hook is needed to dump out a remote processor's registers and memory information in case of an exception or fatal error on the remote processor.
The OMAP platform-specific component is implemented as a platform driver. The platform-specific implementation registers itself with the generic remoteproc layer using the remoteproc Platform API explained below, mainly through the rproc_register and rproc_unregister API when the platform driver is probed for the equivalent platform devices. The platform driver is supplied with the device-specific data such as the remote processor name, the firmware file name, the DDR memory pools to be used for loading and running the remote processor code, the timers to be used to clock the remote processors (if any), and other power management related data like clock domain and hwmod names, suspend and idle addresses and masks. Some of this data, along with the hook function implementations, is provided to the generic remoteproc component in the registration process, while the remaining data is used by the platform-specific implementation itself. OMAP4 represents the remote processors in the system as two platform devices - one for the DSP, and the other for the two Cortex-M3 cores together.
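As an illustration, the following is a minimal sketch of how a platform-specific driver might plug into the generic layer, using the rproc_register() signature from the Platform API section below. The my_rproc_* helpers and the platform-data layout shown here are illustrative, not the actual OMAP implementation.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/remoteproc.h>

/* hypothetical platform data carrying the device-specific details */
struct my_rproc_pdata {
        const char *name;
        const char *firmware;
        struct rproc_mem_pool *memory_pool;
        unsigned sus_timeout;
};

static int my_rproc_start(struct rproc *rproc, u64 bootaddr)
{
        /* release the remote processor from reset at 'bootaddr' */
        return 0;
}

static int my_rproc_stop(struct rproc *rproc)
{
        /* put the remote processor back into reset */
        return 0;
}

static const struct rproc_ops my_rproc_ops = {
        .start = my_rproc_start,
        .stop  = my_rproc_stop,
        /* .suspend, .resume, .iommu_init, .watchdog_init, ... as needed */
};

static int my_rproc_probe(struct platform_device *pdev)
{
        struct my_rproc_pdata *pdata = pdev->dev.platform_data;

        /* hand the device, its ops, firmware name and memory pools to remoteproc */
        return rproc_register(&pdev->dev, pdata->name, &my_rproc_ops,
                              pdata->firmware, pdata->memory_pool,
                              THIS_MODULE, pdata->sus_timeout);
}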
Further design details are explained in the individual Functional Feature Design sections.
rpmsg
rpmsg is a virtio-based messaging bus that allows kernel drivers to communicate with remote processors available on the system. In turn, drivers could then expose appropriate user space interfaces, if needed.
Adding rpmsg support for a new platform is relatively easy; one just needs to register a VIRTIO_ID_RPMSG virtio device with the usual virtio_config_ops handlers. For simplicity, it is recommended to register a single virtio device for every physical remote processor we have in the system, but there are no hard rules, and this decision largely depends on the use cases, platform capabilities, performance requirements, etc.
Each virtio device serves as the device providing a physical transport for Inter Processor Communication (IPC), upon which logical communication channels are constructed. Each virtio device provides platform-specific data, like the message buffer size, the number of messages in the virtio queues, and the address location of these shared memory buffers.
OMAP4, for example, registers two virtio devices to communicate with the remote dual Cortex-M3 processor, because each of the M3 cores executes its own OS instance. This way each of the remote cores may have different rpmsg channels, and the rpmsg core treats them as completely independent processors (despite the fact that both of them are part of the same physical device, and are powered on/off together). The rpmsg messaging in OMAP4 uses shared memory partitioned into two uni-directional virtio vrings for communicating with each remote processor. The first one is used for RX and the second one for TX, with 512 messages of 512 bytes each on each virtio queue.
The rpmsg virtio driver gets probed for each of these virtio devices. The driver queries the device for its platform data, creates the two virtio queues/rings, and associates all the transport buffer messages with the respective queue. The remote processors are started during this transport creation phase, and the appropriate remoteproc notifiers are also registered.
The rpmsg bus provides the design infrastructure for supporting multiple client drivers and logical communication channels over the IPC transport. Every rpmsg device on the rpmsg bus is a logical communication channel with a remote processor (thus rpmsg devices are called channels). Channels are identified by a textual name and have a local ("source") rpmsg address and a remote ("destination") rpmsg address. Each rpmsg device/channel can be accessed through the corresponding matching driver, referred to as an rpmsg client driver. The rpmsg virtio driver can create predefined static rpmsg channels associated with that particular virtio device during the probe phase, and can also create a special server channel. This server channel can be used by the remote processors to advertise any services by name for dynamic creation of channels.
When a driver starts listening on a channel, it binds it with a unique rpmsg src address (a 32-bit integer). This way, when inbound messages arrive at this src address, the rpmsg core dispatches them to that driver (by invoking the driver's rx handler with the payload of the incoming message).
An rpmsg client driver can choose to create character devices and export interfaces to userspace. When writing a driver that exposes rpmsg communication to userland, please keep in mind that remote processors may have direct access to the system's physical memory and/or other sensitive hardware resources (e.g. on OMAP4, some hardware accelerators do not have any MMUs and can have direct access to the physical memory, gpio banks, dma controllers, i2c bus, gptimers, mailbox devices, hwspinlocks, etc.). Moreover, those remote processors might be running an RTOS where every task can access the entire memory/devices exposed to the processor. To minimize the risks of rogue or buggy userland code exploiting/triggering remote processor vulnerabilities, and by that taking over/down the system, it is often desired to limit userland to specific rpmsg channels it is allowed to send messages on, and if possible/relevant, minimize the amount of control it has over the content of the messages.
Currently, there are two rpmsg client drivers - rpmsg_omx & rpmsg_resmgr. rpmsg_resmgr is explained in the resource manager design section, and rpmsg_omx is explained below.
The following are some notable virtio implementation bits used in RPMsg design:
- virtio features
VIRTIO_RPMSG_F_NS should be enabled if the remote processor supports dynamic name service announcement messages.
Enabling this means that rpmsg device (i.e. channel) creation is completely dynamic; the remote processor announces the existence of a remote rpmsg service by sending a name service message (which contains the name and rpmsg addr of the remote service). This message is then handled by the rpmsg bus, which in turn dynamically creates and registers an rpmsg channel (which represents the remote service). If/when a relevant rpmsg driver is registered, it will be immediately probed by the bus, and can then start sending messages to the remote service.
- virtqueue's notify handler
Should inform the remote processor whenever it is kicked by virtio. OMAP4 uses its mailbox device to interrupt the remote processor, and inform it which virtqueue number is kicked (the index of the kicked virtqueue is written to the mailbox register). This allows the message to be delivered to the appropriate remote processor. This is because the Cortex-M3 cores in OMAP4 have two separate transport channels for communicating with the host processor, but only a single mailbox queue for getting the kick.
- virtio_config_ops's ->get() handler
The rpmsg bus uses this handler to request platform-specific configuration values, such as the number of message buffers, buffer sizes, etc.
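A minimal sketch of the notify and ->get() handlers described above for an OMAP-like platform is shown below. The omap_mbox_msg_send() call, the <plat/mailbox.h> header and the per-virtqueue bookkeeping structure are assumptions based on the mailbox API of this kernel generation, and the exact handler signatures vary across kernel versions.

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <plat/mailbox.h>

/* hypothetical per-virtqueue data set up when the vrings are created */
struct my_rpmsg_vq_info {
        struct omap_mbox *mbox; /* mailbox used to kick the remote side */
        unsigned int vq_id;     /* virtqueue index the remote processor expects */
};

static void my_rpmsg_notify(struct virtqueue *vq)
{
        struct my_rpmsg_vq_info *info = vq->priv;

        /* tell the remote processor which virtqueue has pending buffers */
        omap_mbox_msg_send(info->mbox, info->vq_id);
}

static void my_rpmsg_get(struct virtio_device *vdev, unsigned offset,
                         void *buf, unsigned len)
{
        /* copy the requested platform-specific configuration values
         * (buffer sizes, number of buffers, shared memory addresses, ...)
         * from the platform data into 'buf' */
}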
rpmsg-omx
rpmsg-omx is an rpmsg client driver used in OMAP4 to provide interfaces to userspace and integrate with the OpenMax (OMX) framework. This is the driver that enables multimedia applications using the OMX framework. Each of the remote processors in OMAP4 publishes the "OMX" service to the host processor on the name service channel. The rpmsg-omx driver gets probed for this service, creates the necessary character device for access from userspace, and exports standard open, close, read, write and ioctl functionalities on this device. The OMX service acts as a server on the remote processor for creating remote OMX components. Each OMX component in a multimedia application can create its remote component by connecting to this OMX service. A connection request creates a dedicated end-point/channel on the remote processor, enabling the proxy component in the application to offload its task to the remote processor. The channel is destroyed when the handle to the rpmsg-omx driver is closed by the proxy OMX component.
Messages can be sent to and received from the remote processors on these channels using simple read() and write() calls. The application layer is responsible for identifying and tying a response message to a particular request message. This is carried out today by the TI DOMX layer, which abstracts this message matching and provides a shared library for the common OMX API. Any multimedia buffers to be shared with the remote processors need to be authenticated and checked in the rpmsg-omx driver, and cannot be passed blindly. This is necessary as the buffers are being sent from userspace and cannot be trusted. How the applications get the individual buffers is beyond the scope of RPMsg, but typically some sort of central buffer manager is expected in the overall stack. In Android 4.0 (Ice Cream Sandwich), ION provides this functionality, and rpmsg-omx validates each of the buffer handles with ION before passing the equivalent remote processor virtual address for the buffer to the remote OMX component.
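As a rough illustration of the read()/write() flow from userspace (the device node name is illustrative, and the connect ioctl and payload layout are OMX/DOMX-specific and omitted here):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char resp[256];
        const char msg[] = "hello"; /* real payloads follow the OMX/DOMX protocol */
        int fd = open("/dev/rpmsg-omx1", O_RDWR); /* node name is illustrative */

        if (fd < 0)
                return 1;

        /* a real client first connects to its remote OMX component via ioctl */
        write(fd, msg, sizeof(msg)); /* send a message to the remote side */
        read(fd, resp, sizeof(resp)); /* block until the response arrives */

        close(fd);
        return 0;
}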
Functional Features - Design
The following sub-sections explain the individual designs of the various RPMsg features.
Firmware Loading
Firmware Image Loading
The generic remoteproc module does not export any specific interfaces for loading, starting or stopping the remote processor. All these functionalities are achieved transparently when a user module tries to get a handle to a specific remote processor. The remote processors are currently loaded and started by the rpmsg messaging bus component, during the virtio device initialization. The rpmsg messaging bus acquires the remoteproc handle while configuring its transport (vrings), and uses this handle as described above in the rpmsg section.
The loading process itself uses the kernel's firmware class infrastructure. It requires the file system to be configured for firmware support with proper udev or mdev rules. The remoteproc module triggers a non-blocking load using the request_firmware_nowait function with the expected firmware file name and the proper callback function. Loading and starting of the remote processor begins when the callback function is invoked with valid firmware image data.
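A minimal sketch of this request, assuming the standard firmware-class API (the rproc_fw_loaded() callback name and the parsing step are illustrative):

#include <linux/firmware.h>
#include <linux/gfp.h>
#include <linux/module.h>
#include <linux/remoteproc.h>

static void rproc_fw_loaded(const struct firmware *fw, void *context)
{
        struct rproc *rproc = context;

        if (!fw)
                return; /* firmware file not found; the processor stays down */

        /* parse the RPRC image, program the MMU and release the reset here */

        release_firmware(fw);
}

static int rproc_request_fw(struct rproc *rproc, struct device *dev,
                            const char *fw_name)
{
        /* non-blocking: rproc_fw_loaded() runs once the image is available */
        return request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG, fw_name,
                                       dev, GFP_KERNEL, rproc, rproc_fw_loaded);
}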
Firmware Image Format
The remote processor image is maintained as a firmware binary within the filesystem. The format of the binary is currently a simple custom format - this is done to keep the loader code very simple and to keep any image parsing code out of the kernel.
The following enums and structures define the binary format of the images remoteproc loads and boots the remote processors with. The general binary format is as follows:
struct {
        char magic[4] = { 'R', 'P', 'R', 'C' };
        u32 version;
        u32 header_len;
        char header[...] = { header_len bytes of unformatted, textual header };
        struct section {
                u32 type;
                u64 da;
                u32 len;
                u8 content[...] = { len bytes of binary data };
        } [ no limit on number of sections ];
} __packed;
The image begins with a 4-byte "RPRC" magic, a version number, and a free-style textual header that users can easily read. This textual header holds generic information, including version information. Please see the readrprc utility output for more details.
After the header, the firmware contains several sections that should be loaded to memory so the remote processor can access them. Every section begins with its type, the device address (da) where the remote processor expects to find the section (the exact meaning depends on whether the device accesses memory through an MMU or not; if not, da might just be a physical address), the section length, and its content.
Most of the sections are either text or data (which are currently treated exactly the same), but there is one special "resource" section that allows the remote processor to announce/request certain resources from the host. The "resource" section needs to be the first section in the image, so that it can supply remoteproc with the memory information that the remote processor expects to use. There are also a couple of other special sections - an "mmu" section and a "signature" section - which are geared to support secure applications, but are irrelevant for normal operation.
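For illustration, a minimal sketch of how a loader might walk these sections once the header has been parsed; the rproc_da_to_va() translation helper is hypothetical and stands in for whatever da-to-kernel-address mapping the carveout/devmem bookkeeping provides.

#include <linux/errno.h>
#include <linux/remoteproc.h>
#include <linux/string.h>
#include <linux/types.h>

/* mirrors the section layout of the RPRC format described above */
struct rprc_section {
        u32 type;
        u64 da;         /* device address the remote processor expects */
        u32 len;
        u8  content[];  /* len bytes of text/data */
} __packed;

/* hypothetical helper: translate a device address to a kernel virtual address */
void *rproc_da_to_va(struct rproc *rproc, u64 da, u32 len);

static int rproc_load_sections(struct rproc *rproc, const u8 *data, size_t size)
{
        const u8 *pos = data;

        while (pos + sizeof(struct rprc_section) <= data + size) {
                const struct rprc_section *s = (const struct rprc_section *)pos;
                void *va = rproc_da_to_va(rproc, s->da, s->len);

                if (!va)
                        return -EINVAL; /* 'da' not covered by any known region */

                memcpy(va, s->content, s->len); /* copy the section into place */
                pos += sizeof(*s) + s->len;
        }
        return 0;
}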
A resource section is just a packed array of the following struct:
struct fw_resource {
        u32 type;
        u64 da;
        u64 pa;
        u32 len;
        u32 flags;
        u8 name[48];
} __packed;
How a resource is really handled strongly depends on its type. Some resources are just one-way announcements, e.g., a RSC_TRACE type means that the remote processor will be writing log messages into a trace buffer located at the address specified in 'da'. In that case, 'len' is the size of that buffer. A RSC_BOOTADDR resource type announces the boot address (i.e. the first instruction the remote processor should be booted with) in 'da'. Similarly, a RSC_CRASHDUMP type announces the specific device address at which the remote processor will dump out crash information, including its registers.
Other resource entries might be a two-way request/response negotiation, where a certain resource (memory or any other hardware resource) is requested by specifying the appropriate type and name. The host should then allocate such a resource and "reply" by writing the identifier (physical address or any other device id that will be meaningful to the remote processor) back into the relevant member of the resource structure. For example, a RSC_CARVEOUT resource type announces the memory regions that will be used by the image, and expects the device address 'da' to be mapped to a specific 'pa' of length 'len'. If the 'pa' is zero, the remoteproc driver can allocate the desired amount of memory and map it in. The remote processor code would use the published 'pa' in the table if it requires any address translation. A RSC_DEVMEM resource type, on the other hand, is used for announcing the register memory 'da' for an IO device at the desired 'pa'. Both these resource types are useful for remoteproc to construct its MMU table before releasing the remote processor from reset.
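A condensed sketch of how a loader might act on these entries, using the resource types named above; the bootaddr field and the rproc_alloc_carveout()/mapping helpers are hypothetical placeholders for the actual bookkeeping.

static int rproc_handle_resource(struct rproc *rproc, struct fw_resource *rsc)
{
        switch (rsc->type) {
        case RSC_TRACE:
                /* remember the trace buffer at 'da' of size 'len' for debugfs */
                break;
        case RSC_BOOTADDR:
                rproc->bootaddr = rsc->da; /* hypothetical field: boot address */
                break;
        case RSC_CARVEOUT:
                /* if 'pa' is 0, allocate 'len' bytes and publish the result
                 * back into the table for the remote processor to pick up */
                if (!rsc->pa)
                        rsc->pa = rproc_alloc_carveout(rproc, rsc->len); /* hypothetical */
                /* map 'da' -> 'pa' of size 'len' into the remote processor MMU */
                break;
        case RSC_DEVMEM:
                /* map the IO region at 'pa' to the device address 'da' */
                break;
        default:
                return -EINVAL;
        }
        return 0;
}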
Obviously this approach can only be used _before_ booting the remote processor. After the remote processor is powered up, the resource section is expected to stay static. Runtime resource management (i.e. handling requests after the remote processor has booted) will be achieved using the dedicated rpmsg resource manager component.
Most likely this kind of static allocation of hardware resources for remote processors could also use the Device Tree, but this is currently out of scope and will be taken up during the device tree adaptation.
Memory Configuration
Memory configuration is an important aspect of RPMsg, especially during remote processor configuration. Typically, the host processor is in charge of the SDRAM, and as such any memory required for the remote processors has to be allocated by the host processor. A remote processor typically deals only with virtual addresses, and need not have its entire virtual memory backed by physical memory. Further, a remote processor image may have its different sections mapped at different virtual addresses for a number of reasons. For example, a Cortex-M3 has a fixed memory map with different addresses on different buses, and a DSP can have its external memory start only from a certain range. Another reason is the way the memory attributes (like cacheability, executability, posted or non-posted, etc.) are defined for different memory regions.
In OMAP4, the Ducati and Tesla processor cores access memory through an MMU, while the MM hardware accelerators like IVA-HD or ISS access memory directly using physical addresses. Any buffers given to IVA-HD or ISS by the remote processors need to be contiguous in memory. As such, all the remote processor memory is deliberately kept contiguous to simplify the design. Since the processor cores are backed by MMUs, it would otherwise be easy enough to map individual pages for memory that only the MMU-backed remote processors need to access.
The Ducati subsystem has two Cortex-M3 processor cores, each running its own executable, but both are loaded as one using a single firmware image. Each of the executables is an ELF baseimage, and the linker needs a memory map file for placing different sections at appropriate places. This is achieved through a specific Platform file (Platform.xdc) which defines the memory map for that particular processor core. The memory attributes are configured through a Unicache/Attribute-MMU hardware module, and are programmed through a configuration file. The memory attributes are set by the RTOS, SYS/BIOS, during the image bootup.
The memory needs may vary depending on the image running on the remote processors, and as such a flexible way is needed to configure and publish how much memory is needed to the host processor. Further, the linker need not be aware of any peripheral device memory; however, the code would require this device memory to be programmed in its MMU to access that peripheral. This is achieved in RPMsg through the Resource Table. The Resource Table, as explained in the firmware image format section, specifies the required memory inputs through the RSC_CARVEOUT resource type and the device memory inputs through the RSC_DEVMEM resource type. Each firmware image requires a resource table, and so for Ducati, a single Resource Table combines both cores' device and external memory requirements from the individual platform files. This Resource Table is compiled and incorporated into a baseimage, and shows up as a special "resource" section in the firmware binary. Flexibility has been added on the RTOS-side to specify either fixed or dynamic addresses through the 'pa' field in the RSC_CARVEOUT resource type entry.
Allowing the remote processors to specify their memory needs through a Platform file and the Resource Table minimizes the effort to match the memory configurations between the HLOS and RTOS sides. The HLOS-side is only responsible for allocating the contiguous carveout memories. This carveout memory needs to be set aside even before the kernel boots, to avoid ARM memory-aliasing issues in case a different memory attribute needs to be assigned to a specific remote processor memory region for the HLOS driver code.
In Linux, the memory is currently being carved out using the memblock API during the board initialization. RPMsg currently supports two kinds of memory pools:
- Static Carveout Pool - Memory carved out at specific fixed locations. The size and addresses are set-aside based upon the definitions in the board initialization files.
- Dynamic Carveout Pool - Memory carved out only based on size, can be placed anywhere. Value is based upon existing memblock memories carved out already.
The board initialization sets aside these memories. The remoteproc platform devices query this information and store it in their platform data. The remoteproc platform driver code publishes this information to the generic remoteproc component during the registration process. The generic remoteproc module uses this data while processing the resource tables, and either checks against the static carveout pool if a RSC_CARVEOUT type entry has the 'pa' specified, or allocates memory from the dynamic carveout pool if the RSC_CARVEOUT type entry has a null/unassigned 'pa'. These checks ensure that the remote processor is utilizing exclusively the memory set aside for it and is not corrupting any other kernel memory.
Note that this design is valid only on the Android 3.0 kernels, and is being redesigned for upstream. Details on the upstream design will be added soon to the Open Source page. The memory map page for Ducati demonstrates this with all the necessary code excerpts.
Resource Manager (IPC)
RPMsg enables the host processor to offload some of its CPU-intensive tasks to the remote processors. All these processor cores may have to share some peripherals to perform their tasks. Some of the peripherals or resources may be set aside to be used exclusively by either the remote processors or the hardware accelerators that the remote processors interact with. This multitude of resources may be grouped under different clock domains, voltage domains or power domains - to fine-tune the power consumption. However, the host processor running the HLOS is responsible for managing and controlling the overall device power, including configuring these different domains. As such, the remote processors have to request access to the needed peripherals from the host processor. They would also have to request that certain constraints be put on different domains so that the desired performance or power metrics can be achieved. For example, playing a video at 1080p resolution and at 480p resolution can have different needs to achieve the desired performance while not drawing too much power. The RPMsg Resource Manager component provides this interface between the remote and host processors.
The RPMsg Resource Manager comprises an integral messaging entity, rpmsg_resmgr, built on top of the rpmsg bus, and a platform device/driver entity, rpres, that provides the necessary interfaces to the needed resources and integrates with the kernel PM framework. The following are the various peripherals/resources supported by the Resource Manager in OMAP4:
- IVA-HD
- IVA-SEQ0 or iCont1
- IVA-SEQ1 or iCont2
- ISS
- SL2 Interface
- Face Detection module (FDIF)
- DSP processor core
- IPU ducati cores
- GP Timer
- Aux Clock
- Regulator
- GPIO
- SDMA
- I2C
The scope of some of the resources (like the hardware accelerators) is limited to the remote processor sub-systems only, and these may not have dedicated kernel drivers to deal with them. Others, like GPIOs and GPTimers, do have specific kernel drivers as they are also used on the HLOS. All the subsystem-only resource requests are handled by the rpres entities, while the requests for common peripherals are redirected to the appropriate drivers. Constraints are typically limited only to the processor cores and other devices (like the devices managed solely by rpres - FDIF, IVA-HD) which have independent clocks or are in independent voltage or power domains.
The rpres entity ensures that each resource can be requested only once to simplify the design and resource usage on the RTOS-side. If a resource is already active, all subsequent requests are denied. The rpres entity is made up of a rpres platform driver that defines the public interfaces, and a platform-specific rpres_dev module that publishes all the remote processor resources. The rpres platform driver exports simple interfaces, rpres_get & rpres_put to get and release a valid resource manager object handle, and rpres_set_constraints to request and release constraints on these resources. The resource manager objects within rpres are created when the rpres platform driver gets probed with the equivalent rpres platform devices. The rpres_dev module builds the equivalent platform devices for all the necessary platform specific remote processor resources, and publishes all the resource-specific implementations as hooks in its platform data to the rpres platform driver.
The following are the rpres hook functions that need to be supported by the remote resources.
struct rpres_ops {
        int (*start)(struct platform_device *pdev);
        int (*stop)(struct platform_device *pdev);
        int (*set_lat)(struct platform_device *pdev, long v);
        int (*set_bw)(struct platform_device *pdev, long v);
        int (*scale_dev)(struct platform_device *pdev, long v);
};
The start and stop hooks are mandatory and are responsible for starting and stopping the devices, including turning the necessary clocks on/off and performing the unreset/reset. The OMAP4-specific implementation uses the underlying hwmod framework to manage the resources. The remaining three hooks - set_lat, set_bw and scale_dev - are used for requesting and releasing constraints per device. These hooks are heavily dependent on the remote resource device, and not all of them need to be defined for every device.
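As an illustration, a platform-specific rpres_dev entry for a device such as FDIF might publish its hooks roughly as follows (the omap_fdif_* helpers are illustrative stubs, not the actual OMAP4 code):

#include <linux/platform_device.h>

static int omap_fdif_start(struct platform_device *pdev)
{
        /* enable the clocks and deassert reset for the FDIF block,
         * e.g. through the hwmod framework on OMAP4 */
        return 0;
}

static int omap_fdif_stop(struct platform_device *pdev)
{
        /* reverse of start: assert reset and gate the clocks */
        return 0;
}

static int omap_fdif_set_lat(struct platform_device *pdev, long val)
{
        /* forward the wakeup-latency constraint to the PM framework */
        return 0;
}

/* published through the rpres platform device's data; set_bw and
 * scale_dev are omitted because this device does not need them */
static struct rpres_ops omap_fdif_ops = {
        .start   = omap_fdif_start,
        .stop    = omap_fdif_stop,
        .set_lat = omap_fdif_set_lat,
};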
rpmsg-resmgr provides the messaging infrastructure over which all the requests from remote processors are serviced. rpmsg-resmgr is essentially a client driver sitting on the rpmsg bus. Static rpmsg-resmgr devices are created with fixed endpoints along with the rpmsg bus initialization, one for each of the Ducati processors. The rpmsg-resmgr client driver gets probed and initializes the appropriate device objects and debugfs entries. These channels serve as servers for any incoming resource requests from the remote processors. The rpmsg-resmgr channels support specific messages for connecting & disconnecting on the channel, allocating and freeing resources, and requesting and releasing constraints on particular resources. rpmsg-resmgr ensures the security of the resources handed out to the remote processors by giving out special resource handles and authenticating all incoming requests to match one of the allocated handles. The SYS/BIOS-side resource manager interfaces are provided through the IpcResource module. The software on the remote processor connects to the appropriate static message channel before passing any requests through. The IpcResource module provides the lowest-level interfaces for requesting resources. If there are multiple tasks or processes requesting the same resource on the SYS/BIOS-side, it is up to a higher-level software module to manage the reference count and arbitration between the tasks for using the same resource.
Power Management
Remote processors are typically used only when an active usecase is running that utilizes them for offloading some of its processing tasks. Bringing up a remote processor from scratch every time a usecase is launched would have to account for the time required to set up the processor, and may adversely affect the user experience. The remote processors should therefore preferably be powered up at boot time and idled when inactive. Idling of these remote processors can be achieved only through proper Power Management design. Saving power is especially critical for OMAP4 since the Cortex-M3 cores reside in the CORE power domain.
The remoteproc module in RPMsg is responsible for the power management of the different remote processor devices. remoteproc supports the following PM functionalities:
- System Suspend/Resume - suspend the device based on a user request to suspend the OMAP device
- Runtime Device Suspend/Resume - suspend only the specific device (remote processor) based on its idle/autosuspend timeout
The generic remoteproc module implements the common functionality required for the core power management interfaces for suspend, resume, runtime_suspend & runtime_resume. The platform-specific implementation publishes these generic interfaces to the Linux kernel's PM framework through the platform driver registration. The platform-specific implementation also provides the actual functionality for the rproc_ops' PM-specific hooks, suspend() and resume(), registered with the generic remoteproc module. These hooks are invoked automatically whenever the driver's PM functions are executed. The remoteproc power management functionality is actually performed in the equivalent runtime functions, and the system suspend/resume interfaces merely reuse the runtime PM implementations - this reduces the complexity of the synchronization that the remoteproc module has to perform.
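The runtime path builds on the kernel's standard runtime PM autosuspend machinery; a minimal sketch of the pattern involved (the wrapper function names are illustrative):

#include <linux/pm_runtime.h>

/* called once at registration time to enable autosuspend for the device */
static void my_rproc_enable_autosuspend(struct device *dev, int timeout_ms)
{
        pm_runtime_use_autosuspend(dev);
        pm_runtime_set_autosuspend_delay(dev, timeout_ms); /* e.g. 5000 ms */
        pm_runtime_enable(dev);
}

/* called on every outgoing message (see rproc_last_busy() below): refresh
 * the inactivity timer so autosuspend fires only after a quiet period */
static void my_rproc_mark_active(struct device *dev)
{
        pm_runtime_mark_last_busy(dev);
}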
remoteproc is only responsible for device management and has no knowledge of any communication aspects, so it has to synchronize with the messaging component, rpmsg, to check the feasibility of lowering the device power. The runtime aspect ensures that a lower power state is attempted only after the device inactivity/auto-suspend time has expired. However, remoteproc still has to get the necessary confirmation from the rpmsg layer, as there is a possibility that there are pending messages in the transport that have not been processed yet. This is accomplished through the notifier callback registration for different remoteproc device events, including three PM events - pre-suspend, post-suspend and resume. The pre-suspend event allows the rpmsg bus to check if there are any pending messages and to acknowledge or cancel an autosuspend. This check is not required in the case of a system suspend, in which all applications would have already been notified and suspended.
The OMAP-specific implementation is responsible for checking if the corresponding remote processor is ready to be suspended and, if ready (only for runtime autosuspend), requesting the remote processor to save its context. The Cortex-M3 processors execute WFI when idle, and when both cores are in WFI mode with the Cortex-M3 Deep Sleep bit set, the PRCM hardware module is notified and the module is put into standby status. The remote processors may have certain private resources, like internal RAM, that may lose context, and these need to be saved before the remote processors can be fully suspended. Once the standby status is confirmed, a special mailbox message is sent to the remote processor to initiate the context saving. The two Cortex-M3 cores internally synchronize between themselves, save the context of internal resources like the L2RAM and the Unicache/AMMU registers, and signal a ready mask in shared memory. The OMAP remoteproc implementation completes the suspend process based on the readiness of this status flag, and if it is not ready within a certain timeout, rejects the suspend request. The remote processors are then put into reset to complete the suspend.
Once the remote processors have been suspended properly, the rpmsg module is notified through the remoteproc post-suspend notifier callback, allowing it to release its handle to the mailbox device so that this peripheral can also be shut down.
The resume process simply releases the reset of the remote processors, and the SYS/BIOS code is intelligent enough to detect the previously saved context and restore the processor back to its previous executing state. The autosuspend timeout is refreshed every time a communication message is sent to the remote processor. The current default autosuspend time used for OMAP4 Ducati is 5 seconds, but this can be configured through the sysfs entry at runtime.
Tracing
Remote processor tracing provides very useful insight into what a remote processor is executing, and provides a first level of debug information without having to connect a JTAG and perform stop-mode debugging. While it is possible to render these traces directly from the remote processors using dedicated peripherals (like UART or Ethernet), this is usually not practical due to the limited number of these peripherals. The traces therefore need to be communicated to the host processor, which can leverage its logging capabilities and output the traces. Sending traces over the messaging transports would mean that the overall messaging bandwidth has to be shared, reducing the effective bandwidth for essential IPC. The remote processors in OMAP have access to shared memory, and the RPMsg tracing design leverages this to preserve IPC bandwidth.
The RPMsg tracing uses a circular buffer per remote processor, with a shared variable at the end of this trace buffer holding the current write pointer in this shared memory. The location and size of the trace buffer are published to the host processor through the resource table. The traces are exported through a debugfs file; details of this are given in the tracing section of the debugging page. The traces are printed by simply doing a read of the debugfs file. The tracing is currently not continuous and only gives a snapshot of the current trace buffer area. The wrap-around in the trace buffer is properly accounted for while giving out the trace output. An enhancement of this functionality to print the traces continuously is currently in progress.
The SYS/BIOS-side code uses a customized version of the XDC Runtime SysMin module. The module has been enhanced to print a timestamp with each trace print ending in a newline. The timestamp is printed in seconds and has a resolution of the BIOS tick period. The BIOS tick period is configurable, and depends on the timer used for sourcing the BIOS clock. This timestamp gives out the traces only relative to execution time on the remote processor side and has no direct correlation to the execution on the host processor. The SysMin module has also been enhanced to provide this additional write pointer for tracking by the reader on the host processor, and to not clear the shared buffer whenever System_flush() is called. The write pointer is an indicator of the overall number of characters written at any point of time, and indirectly points to the current write index in the circular buffer.
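A small sketch of how the debugfs read can order the circular buffer using that write pointer (total characters written), accounting for wrap-around as described above; the buffer and output handling are simplified:

#include <linux/string.h>

/* 'buf' is the shared trace buffer of 'size' bytes; 'written' is the shared
 * write pointer, i.e. the total number of characters emitted so far */
static size_t trace_snapshot(const char *buf, size_t size, size_t written,
                             char *out)
{
        size_t widx = written % size;

        if (written < size) {
                /* not wrapped yet: the valid data is simply [0, written) */
                memcpy(out, buf, written);
                return written;
        }

        /* wrapped: the oldest data starts at the current write index */
        memcpy(out, buf + widx, size - widx);
        memcpy(out + size - widx, buf, widx);
        return size;
}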
The trace region is reconfigured if a remote processor has been restarted due to a fatal error or exception. In such a case, the tracing design backs up the current trace contents into another debugfs file. This makes it possible to still see traces from a previous run to aid debugging.
The size of the circular buffer is configurable (within certain allowed limits), and an application developer can choose an appropriate value based on the desired trace depth and trace rate. Dedicated memory has to be set aside for these tracing buffers and also has to be accounted for in the overall carveout memory reserved at boot time for the remote processor. The trace buffers are assigned a dedicated memory region in the remote processor's Platform memory definition file (Platform.xdc), and the size of this region limits how large the circular buffer can be. For example, the current OMAP4 platform file has a region size of 0x60000 bytes while the configured trace buffer is 0x8000 bytes. Care must be taken, when customizing, to make the region and trace buffer sizes a multiple of a physical page.
Exception Management
The remote processor code, if not written properly, can cause a variety of exceptions or crashes like any other software. While remote processor tracing can give diagnostic traces on the general code flow, it may not be enough to provide accurate information about crashes. Further, any active usecases would experience a disruption in their execution; they should be gracefully errored out, and be able to reuse the remote processors afterwards. remoteproc provides the general infrastructure for handling the different exception types and dumping out useful information for debugging. The rpmsg bus, which originally started the remote processors, also performs the recovery while cleaning up all the rpmsg client drivers.
The crashes on remote processors can be classified into three main types:
- MMU Faults - A remote processor MMU is unable to fetch an instruction or data address requested by the processor
- WatchDog Errors - The code on the remote processor is stuck in a loop and is unable to schedule other tasks
- Internal Exceptions - There can be some internal exceptions that are not exported outside of the remote processor subsystem
The following sections describe how the error notifications work along with the dump of the necessary exception information in OMAP4.
MMU Faults
The remote processor MMUs in OMAP have 32 TLBs, and can cover 4GB of address space. Hardware-assisted table walking logic is also supported through L1 and L2 Page Table Entry (PTE) tables. The L1 table needs to be physically contiguous. These MMUs can generate an interrupt to the host processor on a variety of faults, including a TLB miss (useful only when table walking logic is not enabled) or a translation fault (PTE not found). The platform-specific remoteproc implementation manages the programming of these MMUs using an iommu object, which is not exposed externally to keep the design generic. The generic remoteproc module registers an MMU fault handler with the iommu driver through the platform-specific iommu initialization hook. The handler function is responsible for using the relevant remoteproc infrastructure to dump out the relevant crash information, and for informing any users through notifier callbacks registered with the remoteproc module.
The crash information is generated by the remote processors themselves into a shared memory location. This generation is triggered slightly differently in the Cortex-M3 cores and the DSP. There are two special registers in the MMUs to aid MMU fault debugging. The MMU_FAULT_AD register can give out the exact PC of the instruction in the DSP that caused the MMU fault. This is achieved in hardware in the DSP (only in OMAP4 and beyond, not available in OMAP3), and is addressed slightly differently in the Cortex-M3 cores. For the M3, an internal bus error response is sent upon an MMU fault, and this is possible only when the MMU_GP_REG register is programmed properly. This causes the MMU fault to generate an internal exception to the M3 core. SYS/BIOS provides the necessary exception handler implementations and can dump the processor registers and other information, such as the executing task handle, its stack pointer and stack size, into a user-provided buffer. This exception buffer is published to remoteproc through the Resource Table.
Please look through the Memory Management Units chapter in the OMAP TRM for more details on the above registers.
WatchDog
The remote processors run SYS/BIOS, an RTOS that provides a simple scheduler based on hierarchical priorities. It is possible that a particular task may be running a busy loop and not yield the processor to execute other tasks. A watchdog timer is used to detect this. There are no dedicated hardware watchdog timers for the Cortex-M3 cores in OMAP, so two general purpose timers, GPT9 & GPT11, are used for watchdog detection on each of the M3 cores. As in the case of MMU faults, the platform-specific remoteproc implementation manages these timers, and an interrupt is generated to the host processor when the corresponding watchdog timer expires. The interrupt executes a watchdog error handler function registered by the generic remoteproc module through the platform-specific watchdog initialization hook. The handler function leverages the same remoteproc infrastructure as the MMU fault handler to dump out the relevant crash information and notify remoteproc users.
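A rough sketch of what the platform-specific watchdog_init hook might look like, wiring the timer interrupt to the handler supplied by the generic module; the interrupt number and the single-instance bookkeeping are illustrative only.

#include <linux/interrupt.h>
#include <linux/remoteproc.h>

static int (*wdt_handler)(struct rproc *); /* handler supplied by remoteproc */
static struct rproc *wdt_rproc;

static irqreturn_t my_rproc_wdt_isr(int irq, void *data)
{
        /* the GPTimer expired: the remote side stopped refreshing it */
        wdt_handler(wdt_rproc);
        return IRQ_HANDLED;
}

static int my_rproc_watchdog_init(struct rproc *rproc,
                                  int (*handler)(struct rproc *))
{
        int irq = 37; /* illustrative; the real code looks up the GPT irq */

        wdt_handler = handler;
        wdt_rproc = rproc;
        return request_irq(irq, my_rproc_wdt_isr, 0, "rproc-wdt", rproc);
}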
The watchdog timer is refreshed continuously on the remote processors by plugging hooks into the SYS/BIOS scheduler. SYS/BIOS supports hook functions when switching Tasks or beginning a Swi. SYS/BIOS also has an Idle Task, which is the lowest priority task and runs when there are no active tasks. Both these hook functions and the Idle task refresh the watchdog timer, postponing the interrupt/watchdog event as long as the scheduler is actively running and switching tasks.
The crash information itself is generated by plugging in the SYS/BIOS exception handler implementation with the interrupt associated with the timers on the M3-cores. These then dump the necessary crash information.
Remote Processor Exceptions
Not all exceptions on the remote processors trigger an event/interrupt on the host processor directly. There are a number of internal core exceptions that generate an interrupt only to the specific remote processor. The Cortex-M3 core has about 16 internal interrupt events, including various exception events like the Non-Maskable Interrupt (NMI), Bus Fault, and Usage Fault (refer to the Exceptions chapter in the Cortex-M3 TRM for further information). All memory accessed by the remote processors in OMAP has to be defined in an Attribute-MMU (AMMU), and the AMMU also generates a XLATE_MMU_FAULT interrupt to the core. The SYS/BIOS exception handler implementations are hooked to these exception interrupts and dump out the processor registers and other information, such as the executing task handle, its stack pointer and stack size, into a user-provided buffer.
These exceptions are notified to the host processor by sending a special mailbox message to the rpmsg messaging bus layer. rpmsg passes this notification on to remoteproc, which then proceeds to perform a crash dump and recovery.
Error Recovery
remoteproc is responsible only for the device management of the remote processors, and provides interfaces to init/deinit a core. The rpmsg component is the first client user of remoteproc today, and is responsible for causing a remote processor to be started and stopped. The remoteproc infrastructure provides notifier registration for different events, and all the above exception events notify the remoteproc users with an ERROR event. The rpmsg layer then cleans up after itself and deletes any existing devices. This in turn calls the remove functions of the various drivers hanging on the rpmsg bus. Any userspace operations on these client rpmsg drivers are errored out with a specific error, allowing the userspace applications to gracefully clean up after themselves. The rpmsg layer then recreates the devices and messaging transports, and in the process restarts/re-initializes the remote processor. Any rpmsg client drivers are re-probed, allowing them to export their driver interfaces. Synchronization in the rpmsg and remoteproc layers restricts the userspace applications from using a remote processor while the recovery is in progress.
This RPMsg design utilizes the bus infrastructure and driver model in the Linux kernel for performing the error recovery, and keeps the design simple while avoiding race conditions between open applications and drivers.
Secure Playback
The Secure Playback use-case requires certain design functionality within the remoteproc code. This feature is specific only to the Android 3.0 Icecream kernel and the Ducati remote processors, and is not applicable otherwise. The full flow of the Secure Playback usecase is beyond the scope of RPMsg, and this section only discusses the pertinent design details with respect to remoteproc.
The Secure Playback usecase requires that the code running on Ducati can be trusted. During normal usage, arbitrary code might be running on the Ducati, so the state of Ducati cannot be trusted when launching a secure playback application. Further, during normal operation, the iommu driver is essentially open, allowing any entries to be programmed into the MMU. To enable the secure playback usecase, the remote processor is forced to shut down, and all existing applications using the remote processor are errored out. The OMAP4 Android kernel uses only static carveout pools in order to support this secure playback application: to secure Ducati, firewalls need to be set up before releasing the remote processor from reset, and this mandates that fixed addresses be used. The remote processor then reconfigures the MMU with a set of known MMU entries using this predetermined carveout memory, and the iommu driver is configured to error out any incoming programming requests. The remote processor image is then restarted after authenticating the firmware image in DDR with a predetermined certificate key and setting up the firewalls. The firewalling of the DDR ensures that no other component can gain access to the memory without generating a secure violation. This ensures the code running on the Ducati can be trusted, and protects against anyone snooping on the decoded buffers. When the secure application terminates, the firewalls are torn down and Ducati is reconfigured and reloaded again in normal mode, after removing and reprogramming the iommu driver in normal mode. The switch between the two modes is triggered by the MM layers when the input stream is detected as a secure stream, and the secure mode is maintained and checked against during the decryption phase for every frame in the relevant layers.
User Interfaces
RPMsg, by itself, doesn't currently have any specific userspace interfaces. The following sub-sections give the kernel API for the remoteproc and rpmsg modules. Userspace interfaces can be provided by individual rpmsg client drivers, if desired.
remoteproc
User API
The User API is intended to be used for accessing a remoteproc device.
- rproc_get - Get a handle to a remote processor instance
struct rproc *rproc_get(const char *name);
Power up the remote processor, identified by the 'name' argument, and boot it. If the remote processor has not been initialized before, load the proper firmware image, program it and power up the core. If the remote processor is already powered on, the function immediately succeeds. On success, returns the rproc handle. On failure, NULL is returned.
- rproc_put - Release the handle to the remote processor
void rproc_put(struct rproc *rproc);
Power off the remote processor, identified by the rproc handle. Every call to rproc_get() must be (eventually) accompanied by a call to rproc_put(). Calling rproc_put() redundantly is a bug. Note: the remote processor will actually be powered off only when the last user calls rproc_put().
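A minimal usage sketch of this pair of calls (the processor name "ipu" is illustrative and must match whatever name the platform code registered):

#include <linux/errno.h>
#include <linux/remoteproc.h>

static int example_use_rproc(void)
{
        struct rproc *rproc = rproc_get("ipu"); /* name is illustrative */

        if (!rproc)
                return -ENODEV; /* firmware missing or power-up failed */

        /* ... use the remote processor, e.g. set up the rpmsg transport ... */

        rproc_put(rproc); /* powered off once the last user releases it */
        return 0;
}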
- rproc_set_constraints - Set specific constraints on a remote processor device
int rproc_set_constraints(struct rproc *rproc, enum rproc_constraint type, long v)
This interface allows a user module to request specific constraints like frequency, bandwidth or latency on a particular remote processor. This API interacts with the underlying Power Management framework API when setting/removing the different constraints.
- rproc_last_busy - Mark the last time remote processor is accessed
void rproc_last_busy(struct rproc *rproc);
Should be called by the messaging framework to inform the remoteproc module that a message is being sent to the remote processor. This lets the remoteproc module know that the remote processor is about to be used, so it can reschedule its power management routines.
- rproc_error_notify - Notify of a remote processor error
int rproc_error_notify(struct rproc *rproc);
Should be called by the messaging framework to inform the remoteproc module that the remote processor is going down due to an error on the remote processor. This enables the remoteproc module to take a dump of any remote processor registers/memory, inform any connected users, and recover the remote processor by bringing it back up again.
- rproc_set_secure - Transition the remote processor into and out of secure state
int rproc_set_secure(const char *name, bool enable);
This API is intended for enabling secure applications (like DRM content players/renderers) to use the remote processors, and to lock out normal applications. This interface allows a platform-specific miscellaneous module or driver to request that a particular remote processor be transitioned from the normal state to the secure state, or vice versa. The remote processor will be reconfigured, reprogrammed and restarted with the proper firewalls set up or torn down, depending on the target state. The miscellaneous module is responsible for presenting userland with its own interfaces.
Platform API
The Platform API is the API used by platform implementors to plug their devices into the generic remoteproc framework.
- rproc_register - Register a particular platform implementation with remoteproc
int rproc_register(struct device *dev, const char *name, const struct rproc_ops *ops, const char *firmware, struct rproc_mem_pool *memory_pool, struct module *owner, unsigned sus_timeout);
Should be called from the underlying platform-specific implementation in order to register a new remoteproc device. 'dev' is the underlying device; 'name' is the name of the remote processor, which will be specified by users calling rproc_get(); 'ops' is the set of platform-specific handlers for various operations; 'firmware' is the name of the firmware file to boot the processor with; 'memory_pool' is a table of assigned memory pools from which the remote processor is given dedicated memory; 'owner' is the underlying module that should not be removed while the remote processor is in use; 'sus_timeout' is the autosuspend timeout used for runtime power management.
Returns 0 on success, or an appropriate error code on failure.
- rproc_unregister - Unregister a particular platform implementation from remoteproc
int rproc_unregister(const char *name);
Should be called from the underlying platform-specific implementation, in order to unregister a remoteproc device that was previously registered with rproc_register().
- rproc_event_register - Register a notifier callback function for rproc events
int rproc_event_register(struct rproc *, struct notifier_block *nb);
This interface allows a user of remoteproc to register a generic callback notification function for different events occurring on the remote processor. To keep the interface simple, it is left to the user to discern and discard the different event types and process them at its own discretion. The remoteproc module simply calls all the notifiers in the order of registration. This particular interface is used within the rpmsg bus implementation to get notified.
- rproc_event_unregister - Unregister a notifier callback function for rproc events
int rproc_event_unregister(struct rproc *, struct notifier_block *nb);
Should be called in order to unregister a notifier callback function on a particular remoteproc device that was previously registered with rproc_event_register().
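A small sketch of how a user (such as the rpmsg bus) might hook into these notifications; the event identifiers shown are illustrative stand-ins for the constants defined in remoteproc.h.

#include <linux/notifier.h>
#include <linux/remoteproc.h>

/* illustrative event ids; the real values come from remoteproc.h */
enum { MY_RPROC_ERROR, MY_RPROC_PRE_SUSPEND, MY_RPROC_POST_SUSPEND, MY_RPROC_RESUME };

static int my_rproc_event(struct notifier_block *nb, unsigned long event,
                          void *data)
{
        switch (event) {
        case MY_RPROC_ERROR:
                /* tear down outstanding transactions and wait for recovery */
                break;
        case MY_RPROC_PRE_SUSPEND:
                /* veto an autosuspend here if messages are still pending */
                break;
        default:
                break;
        }
        return NOTIFY_DONE;
}

static struct notifier_block my_rproc_nb = {
        .notifier_call = my_rproc_event,
};

static int my_register_events(struct rproc *rproc)
{
        return rproc_event_register(rproc, &my_rproc_nb);
}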
Please look up the corresponding include/linux/remoteproc.h in your kernel for other type definitions and comments.
rpmsg
User API
- rpmsg_create_ept - Create a communication end point
struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev, void (*cb)(struct rpmsg_channel *, void *, int, void *, u32), void *priv, u32 addr);
Every rpmsg address in the system is bound to an rx callback (so when inbound messages arrive, they are dispatched by the rpmsg bus using the appropriate callback handler) by means of an rpmsg_endpoint struct.
This function allows drivers to create such an endpoint, and by that, bind a callback, and possibly some private data too, to an rpmsg address (either one that is known in advance, or one that will be dynamically assigned for them).
Simple rpmsg drivers need not call rpmsg_create_ept, because an endpoint is already created for them when they are probed by the rpmsg bus (using the rx callback they provided when they registered with the rpmsg bus).
So things should just work for simple drivers: they already have an endpoint, their rx callback is bound to their rpmsg address, and when relevant inbound messages arrive (i.e. messages whose dst address equals the src address of their rpmsg channel), the driver's handler is invoked to process them.
That said, more complicated drivers might need to allocate additional rpmsg addresses and bind them to different rx callbacks. To accomplish that, those drivers need to call this function. Drivers should provide their channel (so the new endpoint binds to the same remote processor their channel belongs to), an rx callback function, optional private data (which is handed back when the rx callback is invoked), and the address they want to bind to the callback. If addr is RPMSG_ADDR_ANY, rpmsg_create_ept will dynamically assign an available rpmsg address (drivers should have a very good reason not to simply use RPMSG_ADDR_ANY here).
Returns a pointer to the endpoint on success, or NULL on error.
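For illustration only, a driver that needs one extra, dynamically-addressed endpoint in addition to its default one might do something like this (the function names are hypothetical):
#include <linux/rpmsg.h>

static void my_extra_cb(struct rpmsg_channel *rpdev, void *data, int len,
			void *priv, u32 src)
{
	/* handle messages arriving on the dynamically assigned address */
	dev_info(&rpdev->dev, "got %d bytes from 0x%x\n", len, src);
}

static int my_setup_extra_ept(struct rpmsg_channel *rpdev)
{
	struct rpmsg_endpoint *ept;

	/* let the rpmsg bus pick a free rx address for this endpoint */
	ept = rpmsg_create_ept(rpdev, my_extra_cb, NULL, RPMSG_ADDR_ANY);
	if (!ept)
		return -ENOMEM;

	/* the assigned address is available in ept->addr for the remote side */
	return 0;
}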
- rpmsg_destroy_ept - Destroy an existing communication end point
void rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
Destroys an existing rpmsg endpoint. User should provide a pointer to an rpmsg endpoint that was previously created with rpmsg_create_ept().
- register_rpmsg_driver - Register an rpmsg client driver with the rpmsg bus
int register_rpmsg_driver(struct rpmsg_driver *rpdrv);
Registers an rpmsg driver with the rpmsg bus. The user should provide a pointer to an rpmsg_driver struct, which contains the driver's ->probe() and ->remove() functions, an rx callback, and an id_table specifying the names of the channels this driver is interested in being probed with.
- unregister_rpmsg_driver - Unregister an rpmsg client driver from the rpmsg bus
void unregister_rpmsg_driver(struct rpmsg_driver *rpdrv);
Unregisters an rpmsg driver from the rpmsg bus. The user should provide a pointer to a previously registered rpmsg_driver struct.
- rpmsg_send - Send a message on an rpmsg device
int rpmsg_send(struct rpmsg_channel *rpdev, void *data, int len);
Sends a message across to the remote processor on a given channel. The caller should specify the channel, the data it wants to send, and its length (in bytes). The message will be sent on the specified channel, i.e. its source and destination address fields will be set to the channel's src and dst addresses.
In case there are no TX buffers available, the function will block until one becomes available (i.e. until the remote processor consumes a tx buffer and puts it back on virtio's used descriptor ring), or until a timeout of 15 seconds elapses. When the latter happens, -ERESTARTSYS is returned. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
- rpmsg_sendto - Send a message to a specific endpoint on an rpmsg device
int rpmsg_sendto(struct rpmsg_channel *rpdev, void *data, int len, u32 dst);
Sends a message across to the remote processor on a given channel, to a destination address provided by the caller. The caller should specify the channel, the data it wants to send, its length (in bytes), and an explicit destination address. The message will then be sent to the remote processor to which the channel belongs, using the channel's src address and the user-provided dst address (the channel's dst address is ignored).
In case there are no TX buffers available, the function will block until one becomes available (i.e. until the remote processor consumes a tx buffer and puts it back on virtio's used descriptor ring), or until a timeout of 15 seconds elapses. When the latter happens, -ERESTARTSYS is returned. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
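A minimal, hypothetical usage sketch (the destination address 100 is arbitrary, not a real service address):
#include <linux/rpmsg.h>

/* send a small payload to an explicit remote address on this channel */
static int my_send_to(struct rpmsg_channel *rpdev)
{
	char msg[] = "ping";

	/* uses the channel's src address and the explicit dst address 100 */
	return rpmsg_sendto(rpdev, msg, sizeof(msg), 100);
}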
- rpmsg_send_offchannel - Send a message to a specific endpoint from a specific endpoint
int rpmsg_send_offchannel(struct rpmsg_channel *rpdev, u32 src, u32 dst, void *data, int len);
Sends a message across to the remote processor, using the src and dst addresses provided by the user. The caller should specify the channel, the data it wants to send, its length (in bytes), and explicit source and destination addresses. The message will then be sent to the remote processor to which the channel belongs, but the channel's src and dst addresses will be ignored (the user-provided addresses will be used instead).
In case there are no TX buffers available, the function will block until one becomes available (i.e. until the remote processor consumes a tx buffer and puts it back on virtio's used descriptor ring), or until a timeout of 15 seconds elapses. When the latter happens, -ERESTARTSYS is returned. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
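Hypothetically, a driver replying on behalf of one of its additional endpoints (see rpmsg_create_ept above) could pass that endpoint's address as the source; the helper below is an illustrative sketch only:
#include <linux/rpmsg.h>

/* reply from 'my_addr' (e.g. an extra endpoint's address) to 'peer_addr' */
static int my_offchannel_reply(struct rpmsg_channel *rpdev,
			       u32 my_addr, u32 peer_addr)
{
	char msg[] = "ack";

	return rpmsg_send_offchannel(rpdev, my_addr, peer_addr,
				     msg, sizeof(msg));
}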
- rpmsg_trysend - Non-blocking rpmsg_send function
int rpmsg_trysend(struct rpmsg_channel *rpdev, void *data, int len);
Sends a message across to the remote processor on a given channel. The caller should specify the channel, the data it wants to send, and its length (in bytes). The message will be sent on the specified channel, i.e. its source and destination address fields will be set to the channel's src and dst addresses.
In case there are no TX buffers available, the function will immediately return -ENOMEM without waiting until one becomes available. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
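A short sketch of this fail-fast variant, assuming a hypothetical driver that prefers to drop a status update rather than block when the remote processor is slow to return tx buffers:
#include <linux/rpmsg.h>

static void my_post_status(struct rpmsg_channel *rpdev)
{
	char status[] = "busy";
	int err;

	/* returns -ENOMEM immediately if no tx buffer is free */
	err = rpmsg_trysend(rpdev, status, sizeof(status));
	if (err)
		dev_dbg(&rpdev->dev, "status update dropped: %d\n", err);
}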
- rpmsg_trysendto - Non-blocking rpmsg_sendto function
int rpmsg_trysendto(struct rpmsg_channel *rpdev, void *data, int len, u32 dst);
Sends a message across to the remote processor on a given channel, to a destination address provided by the user. The user should specify the channel, the data it wants to send, its length (in bytes), and an explicit destination address. The message will then be sent to the remote processor to which the channel belongs, using the channel's src address and the user-provided dst address (the channel's dst address is ignored).
In case there are no TX buffers available, the function will immediately return -ENOMEM without waiting until one becomes available. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
- rpmsg_trysend_offchannel - Non-blocking rpmsg_send_offchannel function
int rpmsg_trysend_offchannel(struct rpmsg_channel *rpdev, u32 src, u32 dst, void *data, int len);
Sends a message across to the remote processor, using source and destination addresses provided by the user. The user should specify the channel, the data it wants to send, its length (in bytes), and explicit source and destination addresses. The message will then be sent to the remote processor to which the channel belongs, but the channel's src and dst addresses will be ignored (the user-provided addresses will be used instead).
In case there are no TX buffers available, the function will immediately return -ENOMEM without waiting until one becomes available. The function can only be called from a process context (for now). Returns 0 on success and an appropriate error value on failure.
Please look up the corresponding include/linux/rpmsg.h in your kernel for other type definitions and comments.
[edit] Examples
[edit] remoteproc - ipu client
The following example demonstrates typical usage by a remote processor client that wants to use the remote processor "ipu" on OMAP4.
#include <linux/remoteproc.h>

int dummy_rproc_example(void)
{
	struct rproc *my_rproc;

	/* let's power on and boot the image processing unit */
	my_rproc = rproc_get("ipu");
	if (!my_rproc) {
		/*
		 * something went wrong. handle it and leave.
		 */
		return -ENODEV;
	}

	/*
	 * the 'ipu' remote processor is now powered on, and we have a
	 * valid handle.... let it work !
	 */

	/* if we no longer need ipu's services, power it down */
	rproc_put(my_rproc);

	return 0;
}
If this is the first client using the remote processor "ipu", the remote processor is loaded with the firmware image registered by the corresponding platform-specific implementation and is started.
[edit] rpmsg - simple rpmsg client driver
The following is a simple rpmsg driver that sends a "hello!" message on probe(), and dumps the content of any incoming message to the console.
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/rpmsg.h>

static void rpmsg_sample_cb(struct rpmsg_channel *rpdev, void *data, int len,
						void *priv, u32 src)
{
	print_hex_dump(KERN_INFO, "incoming message:", DUMP_PREFIX_NONE,
						16, 1, data, len, true);
}

static int rpmsg_sample_probe(struct rpmsg_channel *rpdev)
{
	int err;

	dev_info(&rpdev->dev, "chnl: 0x%x -> 0x%x\n", rpdev->src, rpdev->dst);

	/* send a message on our channel */
	err = rpmsg_send(rpdev, "hello!", 6);
	if (err) {
		pr_err("rpmsg_send failed: %d\n", err);
		return err;
	}

	return 0;
}

static void __devexit rpmsg_sample_remove(struct rpmsg_channel *rpdev)
{
	dev_info(&rpdev->dev, "rpmsg sample client driver is removed\n");
}

static struct rpmsg_device_id rpmsg_driver_sample_id_table[] = {
	{ .name = "rpmsg-client-sample" },
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, rpmsg_driver_sample_id_table);

static struct rpmsg_driver rpmsg_sample_client = {
	.drv.name	= KBUILD_MODNAME,
	.drv.owner	= THIS_MODULE,
	.id_table	= rpmsg_driver_sample_id_table,
	.probe		= rpmsg_sample_probe,
	.callback	= rpmsg_sample_cb,
	.remove		= __devexit_p(rpmsg_sample_remove),
};

static int __init init(void)
{
	return register_rpmsg_driver(&rpmsg_sample_client);
}
module_init(init);

static void __exit fini(void)
{
	unregister_rpmsg_driver(&rpmsg_sample_client);
}
module_exit(fini);
The sample driver is automatically probed when the remote processor publishes a service named "rpmsg-client-sample". The driver then receives any messages sent to it through the rpmsg_sample_cb callback registered with the rpmsg bus.