Bela audio driver for common audio APIs

About Student: Marck Koothoor
Mentors: Giulio Moro
Code: not yet created!
Wiki: [N/A]
GSoC: Proposal Request

Status

Proposal review.

About you

IRC: Marck Koothoor
Github: https://github.com/marck3131
School: Veermata Jijabai Technological Institute (VJTI)
Country: India
Primary language: English, Hindi, Marathi
Typical work hours: 10AM-7PM IST
Previous GSoC participation: This is my first time participating in GSoC. I'm interested in embedded systems, as I have experience working with the ESP32, ESP8266 and Arduino UNO. I'm looking forward to working on audio drivers.

About your project

Project name: Bela audio driver for common audio APIs

Description

Bela is an open-source embedded computing platform for creating responsive, real-time interactive systems with audio and sensors. It features ultra-low latency, high-resolution sensor sampling, a convenient and powerful browser-based IDE, and a fully open-source toolchain that includes support for both low-level languages like C/C++ and popular computer music programming languages like Pure Data, SuperCollider and Csound. There are two Bela systems: the original Bela and the Bela Mini. Both are open-source hardware systems based on the Beagle single-board computers (Bela uses the BeagleBone Black, and Bela Mini uses the PocketBeagle). The Bela software extends the functionality of the Beagle systems by integrating audio processing and sensor connectivity in a single, high-performance package.

Goal

The main purpose of the project is to provide unified access to Bela's audio hardware by means of an ALSA plugin. Adding an ALSA plugin would make it easier to support Bela from more programming languages and from other audio software that uses libasound on Linux.

The project will also cover all the components necessary for interfacing with this plugin, such as an example user-space application and instructions on how to use the ALSA API with Bela.

Implementation

ALSA plugin

ALSA plugins are used to create virtual devices that can be used like normal hardware devices but cause extra processing of the sound stream to take place. They are set up while configuring ALSA in the .asoundrc file. PCM plugins extend the functionality and features of PCM devices. Programs that use the PCM interface, whether they display PCM types, features and setup parameters or read from a PCM device and write to standard output, generally follow the pseudo-code shown below.
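A minimal sketch of that general shape, assuming a playback stream on the default device (the parameters and the silent buffer are purely illustrative):

#include <alsa/asoundlib.h>

int play_silence(void)
{
        snd_pcm_t *pcm;
        short buffer[2 * 441] = { 0 }; /* 10 ms of interleaved stereo silence */

        /* open a PCM device by name */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return -1;
        /* negotiate format, access, channels, rate, resampling, latency */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 44100, 1, 500000) < 0) {
                snd_pcm_close(pcm);
                return -1;
        }
        for (int i = 0; i < 100; i++)        /* main transfer loop: ~1 s */
                snd_pcm_writei(pcm, buffer, 441);
        snd_pcm_drain(pcm);                  /* let queued audio play out */
        snd_pcm_close(pcm);
        return 0;
}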


Why ALSA Plugins?

ALSA consists of these 3 components:

1. A set of kernel drivers. These drivers are responsible for handling the physical sound hardware from within the Linux kernel, and have been the standard sound implementation in Linux since kernel version 2.5.

2. A kernel-level API for manipulating the ALSA devices.

3. A user-space C library for simplified access to the sound hardware from userspace applications. This library is called libasound and is required by all ALSA-capable applications.

Plugins are used to create virtual devices that can be used like normal hardware devices but cause extra processing of the sound stream to take place. Virtual devices are defined in the .asoundrc file in your home directory.

pcm.SOMENAME {
    type PLUGINTYPE
    slave {
        pcm SLAVENAME
    }
}

This creates a new virtual device with name SOMENAME of type PLUGINTYPE that pipes its output to some other virtual or hardware device SLAVENAME. SOMENAME can be any simple name. It's the name you'll use to refer to this device in the future. There are several virtual device names that are predefined, such as default and dmix. PLUGINTYPE is one of the names listed in the official documentation above. Examples are dmix (a plugin type as well as a predefined virtual device), jack, and linear. SLAVENAME is the name of another virtual device or a string describing a hardware device. To specify the first device of the first card use "hw:0,0" (with the quotes).
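
For example, a hypothetical entry that exposes the first hardware device under a name of our choosing, routed through the plug plugin (which performs automatic format and rate conversion), might look like this:

# ~/.asoundrc -- hypothetical example; "bela" is a name of our choosing
pcm.bela {
    type plug
    slave {
        pcm "hw:0,0"
    }
}

An application can then open this device simply by the name "bela".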

To add Bela support to more programming languages, cross-platform user-space libraries like RtAudio, PortAudio and JACK can connect to the ALSA drivers.

ALSA Drivers API

I will be using some of the functions from the ALSA drivers API to read/write, open/close and otherwise interact with Bela.

Bela has a simple API of three functions: setup(), render() and cleanup(). The setup() function initialises the hardware, allocates memory and sets up any other resources that will be needed in render(). The render() function is where all of Bela's real-time processing takes place. The cleanup() function does tasks like freeing any memory that was allocated in setup().
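A minimal pass-through render.cpp shows the shape of this API; audioRead() and audioWrite() are Bela utility functions, and the channel wrap-around is just for illustration:

#include <Bela.h>

bool setup(BelaContext *context, void *userData)
{
    // allocate memory and initialise any resources used by render()
    return true; // returning false aborts startup
}

void render(BelaContext *context, void *userData)
{
    // copy each input frame straight to the corresponding output
    for(unsigned int n = 0; n < context->audioFrames; ++n)
        for(unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
            audioWrite(context, n, ch,
                       audioRead(context, n, ch % context->audioInChannels));
}

void cleanup(BelaContext *context, void *userData)
{
    // free anything allocated in setup()
}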

Outline for writing ALSA plugin

The Bela core cannot call into the Linux kernel, so there has to be a custom ALSA plugin that lets user space interact with Bela. To write the plugin and its driver side, I'll follow these basic steps first (a skeleton sketch follows the list):

  • create a probe callback.
  • create a remove callback.
  • create a struct bela_driver structure containing the callbacks.
  • create an init function that just calls Bela_initAudio() to initialise the rendering system.
  • create an exit function to call the cleanup() function.
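
A minimal sketch of that skeleton, assuming a platform-driver registration style (every name here is illustrative, and the init/exit hooks are where the rendering system would be brought up and torn down):

#include <linux/module.h>
#include <linux/platform_device.h>

static int bela_probe(struct platform_device *pdev)
{
        /* called when the device appears: create the sound card here */
        return 0;
}

static int bela_remove(struct platform_device *pdev)
{
        /* undo whatever probe() set up */
        return 0;
}

static struct platform_driver bela_driver = {
        .probe  = bela_probe,
        .remove = bela_remove,
        .driver = { .name = "bela-audio" },
};

static int __init bela_init(void)
{
        /* initialise the rendering system, then register the driver */
        return platform_driver_register(&bela_driver);
}

static void __exit bela_exit(void)
{
        /* clean up, then unregister the driver */
        platform_driver_unregister(&bela_driver);
}

module_init(bela_init);
module_exit(bela_exit);
MODULE_LICENSE("GPL");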

PCM interface: The PCM middle layer of ALSA is quite powerful, and each driver only needs to implement the low-level functions that access its hardware. The headers <sound/pcm.h> and <sound/pcm_params.h> provide the hw_params-related functions. A PCM instance is allocated by the snd_pcm_new() function. After the PCM is created, the operators need to be set:

static struct snd_pcm_ops snd_mychip_playback_ops = {
        .open =        snd_mychip_pcm_open,
        .close =       snd_mychip_pcm_close,
        .ioctl =       snd_pcm_lib_ioctl,
        .hw_params =   snd_mychip_pcm_hw_params,
        .hw_free =     snd_mychip_pcm_hw_free,
        .prepare =     snd_mychip_pcm_prepare,
        .trigger =     snd_mychip_pcm_trigger,
        .pointer =     snd_mychip_pcm_pointer,
};

After setting the operators, call

snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
                                      snd_dma_pci_data(chip->pci),
                                      64*1024, 64*1024);

to pre-allocate the buffer. When the PCM substream is opened, a PCM runtime instance is allocated and assigned to the substream. This pointer is accessible via substream->runtime. The runtime holds most of the information needed to control the PCM: copies of the hw_params and sw_params configurations, the buffer pointers, mmap records, spinlocks, etc.
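
As an illustration, a hypothetical hw_params callback for the ops table above could allocate the stream buffer from the preallocated pages and inspect the configuration requested by the application:

static int snd_mychip_pcm_hw_params(struct snd_pcm_substream *substream,
                                    struct snd_pcm_hw_params *hw_params)
{
        /* the requested configuration is readable here, e.g. via
         * params_rate(hw_params) and params_channels(hw_params);
         * after allocation, substream->runtime->dma_area points at
         * the buffer the PCM middle layer will transfer through */
        return snd_pcm_lib_malloc_pages(substream,
                                        params_buffer_bytes(hw_params));
}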


  • get callback

This callback is used to read the current value of the control and return it to user space. For example:

static int snd_myctl_get(struct snd_kcontrol *kcontrol,
                         struct snd_ctl_elem_value *ucontrol)
{
        struct mychip *chip = snd_kcontrol_chip(kcontrol);
        ucontrol->value.integer.value[0] = get_some_value(chip);
        return 0;
}
  • put callback

This callback is used to write a value from user-space.

static int snd_myctl_put(struct snd_kcontrol *kcontrol,
                         struct snd_ctl_elem_value *ucontrol)
{
        struct mychip *chip = snd_kcontrol_chip(kcontrol);
        int changed = 0;
        if (chip->current_value !=
             ucontrol->value.integer.value[0]) {
                change_current_value(chip,
                            ucontrol->value.integer.value[0]);
                changed = 1;
        }
        return changed;
}

Return 1 if the value has changed and 0 otherwise. If any fatal error happens, return a negative error code, as usual.

  • When the application calls snd_pcm_open() or snd_pcm_readi()/snd_pcm_writei(), the PCM data is handled in a thread created when the user-space driver was initialised.
Application --> ALSA --> Thread (user space)

As ALSA only allows a plugin to work with snd_pcm_readi() and snd_pcm_writei() calls, rather than a callback-driven model, the read()/write() functions have to be called from wrapping code.
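
A sketch of that blocking model from the application's side, assuming the virtual device is exposed under the hypothetical name "bela" (format, rate and latency values are illustrative):

#include <alsa/asoundlib.h>

#define FRAMES   128
#define CHANNELS 2

int passthrough(void)
{
        snd_pcm_t *cap, *play;
        short buf[FRAMES * CHANNELS];

        if (snd_pcm_open(&cap, "bela", SND_PCM_STREAM_CAPTURE, 0) < 0 ||
            snd_pcm_open(&play, "bela", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return -1;
        snd_pcm_set_params(cap, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           CHANNELS, 44100, 1, 50000);
        snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           CHANNELS, 44100, 1, 50000);
        for (;;) {
                /* both calls block until a full chunk has been produced or
                 * consumed; inside the plugin they would wrap Bela's own I/O */
                if (snd_pcm_readi(cap, buf, FRAMES) < 0)
                        break;
                if (snd_pcm_writei(play, buf, FRAMES) < 0)
                        break;
        }
        snd_pcm_close(cap);
        snd_pcm_close(play);
        return 0;
}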

User space

To call the read() and write() functions without system calls, there has to be a generic user-space ALSA device, because the Bela core cannot call into the Linux kernel. To write a user-space driver, the basic steps are: open the UIO device (so it is ready to use), get the size of the memory region, map the device registers, unmap them when done, and implement the functions needed (select(), read(), write(), mmap()).
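
A sketch of those steps, assuming a UIO node at /dev/uio0 whose map size is read from sysfs:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* open the UIO device so it is ready to use */
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0)
                return 1;

        /* get the size of the memory region from sysfs */
        size_t size = 0;
        FILE *f = fopen("/sys/class/uio/uio0/maps/map0/size", "r");
        if (f) {
                fscanf(f, "%zx", &size);
                fclose(f);
        }

        /* map the device registers into our address space */
        void *regs = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) {
                close(fd);
                return 1;
        }

        /* read()/select() on fd wait for interrupts; regs gives
         * direct access to the device registers */

        munmap(regs, size); /* unmap when done */
        close(fd);
        return 0;
}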


To connect to the driver: Xenomai with the Cobalt core provides basic device I/O such as open(), close() and ioctl(), wrapped in a more user-friendly API for developers in user space. So I will make use of ioctl() to call read() (commands defined with the _IOR macro) and write() (commands defined with _IOW), as ioctl() is useful in a device driver for setting the configuration of the device. The following steps are involved in using ioctl (see the sketch after the list):

  • Create the IOCTL command in the driver
  • Write the IOCTL function in the driver
  • Create the IOCTL command in a user-space application
  • Use the ioctl system call in user space
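
A user-space sketch with hypothetical command numbers (in practice the #defines live in a header shared with the driver, and /dev/bela is an assumed device node):

#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* hypothetical commands, shared with the driver */
#define BELA_IOC_MAGIC 'b'
#define BELA_RD_VALUE  _IOR(BELA_IOC_MAGIC, 1, int) /* driver -> user */
#define BELA_WR_VALUE  _IOW(BELA_IOC_MAGIC, 2, int) /* user -> driver */

int main(void)
{
        int fd = open("/dev/bela", O_RDWR); /* assumed device node */
        if (fd < 0)
                return 1;

        int value = 42;
        if (ioctl(fd, BELA_WR_VALUE, &value) < 0) /* set configuration */
                perror("BELA_WR_VALUE");
        if (ioctl(fd, BELA_RD_VALUE, &value) < 0) /* read it back */
                perror("BELA_RD_VALUE");

        close(fd);
        return 0;
}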
Bela Cape and ALSA interface

At present, there are separate audio drivers for SuperCollider and Csound that call into Bela's C API. Referring to these will give me an idea of how to write the ALSA plugins. The Bela system makes use of both the ARM CPU and the PRU unit: the PRU shuttles data between the hardware and a memory buffer and interrupts the ARM when data is ready; the Xenomai audio task then processes the data. Bela is based on Linux with the Xenomai real-time kernel extensions.

[Figure: Bela_description.jpeg, the overview of the system]

The programs will interface with the virtual ALSA devices created by means of plugins and still utilize the Xenomai threads for data transfers. To test, I'll first run a few examples of ALSA plugins using the ALSA API. The API provides initialisation and cleanup functions in which the programmer can allocate and free resources. The following Linux commands play an audio file and speak text as audio:

aplay filename
spd-say "Your text"

Timeline

Date Milestone Action Items
Feb - 03rd April '2022 Pre-work
  • Getting familiar with BeagleBoard environment.
  • Build and run the BelaImage.
  • Understanding the past projects.
  • Exploring ALSA plugins.
04th April '2022-19th April '2022 Proposal Submission
  • Getting feedback regarding the proposal
20th May '2022-12th June '2022 Community Bonding
  • Getting familiar with the entire code base
  • Looking for some issues to solve related to this project.
  • Trying out the existing examples on Bela Cape
13th June '2022 Milestone #1
  • Introductory video
  • Setting up the environment (Bela + Xenomai)
  • Write a custom device tree overlay
20th June '2022 Milestone #2
  • Starting with the ALSA plugins
  • Testing the existing ALSA plugins
27th June '2022 Milestone #3
  • Trying out the basic existing ALSA plugins on Bela
  • Start writing the custom plugins for BELA(1/2)
04th July '2022 Milestone #4
  • Start writing the custom plugins for BELA(2/2)
  • Test basic input/output programs with BELA
11th July '2022 Milestone #5
  • Configuring ALSA plugins for BELA(1/2)
18th July Milestone #6
  • Configuring ALSA plugins for BELA(2/2)
  • Test the plugins
  • Documenting into a blog
25th July '2022 Milestone #7
  • Write the functions via ALSA API
  • Test the functions
  • Write the ioctl() functions to call read() & write()
01st August '2022 Milestone #8
  • Complete adding the ALSA plugins
  • Testing the Bela interface
  • Writing a Blog for ALSA API vs Bela's API
08th August '2022 Milestone #9
  • Connect either one of the audio libraries (Rtaudio, portaudio, jack) to the drivers.
  • Test these audio I/O libraries
15th August '2022 Milestone #10
  • Communicating with the mentor for the required changes and improvements to be made
  • Implementing the suggestions
22nd August '2022 Milestone #11
  • Complete pending tasks
  • Start documentation and preparations for the final video
  • Feedback from mentors
29th August '2022 Milestone #12
  • Mentor Evaluation after submission of work
  • Complete YouTube video
05th September '2022 Milestone #13
  • GSoC completion

Experience and approach

  • I have done a few projects in embedded systems with the ESP32 and FreeRTOS. In my last project, my team and I designed a PCB for a maze-solving bot, using the BFS algorithm and dead-end replacement rules. I have decent experience in C, C++ and Python.
  • I am quite acquainted with computer vision and web development.
  • The project idea requires good knowledge of the build systems as well; I'll do my best contributing to this project and will learn from the journey.

Contingency

Despite the limited resources available for libasound (the low-level ALSA user-space library), I would keep trying to track down and solve errors using what is available on the internet. If I get stuck, or the project is not heading in a positive direction, I would get in touch with the mentors on the IRC channels and try other approaches.

Benefit

In order to make the process of adding Bela support to more programming languages in the future easier, we could think of simply adding Bela support to some common audio backend libraries, so that a single effort can be reused across several pieces of software that use that same library. Upon successful completion, the project will make it easier to add more applications and programming languages to Bela.

 The purpose of this project is to allow Bela to show up as a device that libasound can interact with, so that one does not need to 
 adapt existing programs in order to run them on Bela (though some programs may still require additional changes).

~ Giulio Moro

Resources

Misc

Completed all the requirements listed on the ideas page and submitted the cross-compilation task through pull request #166.