Generally speaking, it seems to me that driver development for OS4 is not supported, documented, or encouraged at all (correct me if I am wrong).
My current focus is on how to develop an AHI audio driver from the ground up.
First of all, I would like to understand how and when the driver is loaded and executed, and what the workflow of a generic driver looks like.
Every contribution is more than welcome. As usual, thanks ;)
I have found that most of AHI is written using some includes that aren't quite standard. I recall "geek gadgets" was one, but I think it was not the only special requirement.
I am no compiler expert, but I have been unable to compile AHI or most of the drivers using a native AmigaOS compiler install.
The X1000 HDAudio driver is an exception, as it was built completely using the standard compiler install. It CAN be done.
If anyone can help me solve the compiler setup for AHI itself, I have some changes I'd like to commit.
A good starting point for OS4 native drivers might be the "filesave" driver code. It's not tied to any specific hardware, it really shows only how to get sound OUT of AHI, without any details that will be sound card specific.
Further details: Berlios no longer hosts the AHI code, but it's up on SourceForge: http://sourceforge.net/projects/arp2/files/ahi-berlios-ftp/
There's lots of stuff there, both for AHI developers and for programmers who want to use AHI for sound output.
I've been quite busy with the new job, but I will try to keep an eye on this thread occasionally, as time permits.
Thanks for your feedback.
Good idea, I need to see the smallest driver first and how to get the sounds sent to it out of AHI.
For the sake of simplicity, I found the "Void" driver example code even better than "Filesave"; it really looks like a skeleton/template. In fact, the driver itself does nothing: it simply discards any audio data sent to it (and it doesn't support recording).
This should also help writing a proper Makefile for OS4.
Study ongoing ;)
Well, since AHI audio drivers are basically library-based audio drivers, I feel quite confident I'll be able to compile them, despite some drawbacks (the source code targets M68K AmigaOS, with a lot of flags for multi-architecture support).
Maybe I can help from the compiler side and you can provide more info about the workflow of a generic driver? Yes? :)
There are a few things about AHI drivers that are a bit different from most other projects.
I'll only touch on a few of the points, as a complete discussion would take too long. If you or anyone has specific questions, this thread would be a great place to post them. Also, if anyone wishes to correct anything that I post incorrectly, you are invited and welcomed to do so.
AHI drivers look very similar to the library/device code model that is common in OS4. But the similarities don't run very deep. Under the hood there are some very specific differences.
Most devices are accessed using Exec IO Requests. This is NOT the case for AHI drivers. Yes, there is an Exec-style AHI interface, but that's not where the driver code goes.
When AHI is started, it reads the DEVS:Audiomodes directory, where you usually have the modefiles for each driver. These modefiles are IFF blocks that describe the basic capabilities of the device they support. Much of what you see in the AHI Prefs window is read from these files. A side note for developers: we now have a command that will read a text file and generate modefiles easily. It can be added to the project makefile so that a modefile is created with each make. Many of the existing modefiles appear to have been hacked together with a hex editor, so it's nice to have a more proper means available.
Of note to users: You can prevent AHI from mounting a device simply by moving the modefile out of devs:Audiomodes.
Back to the startup flow. For each modefile found, AHI loads and opens the driver. At this point the driver should go out and look for whatever device it supports. It IS possible to find multiple devices of the same type. If no device is found, the driver exits on return to AHI.
If AHI prefs is opened, AHI will ask the driver for all the specifics: how many inputs, what they are named, same for outputs, and all the other stuff. Actually, a fair amount of the driver code exists just to support all the queries from AHI.
There's also a query model for card features: There's a small list of features that _MIGHT_ be supported by the audio device, and might not. So when asked the driver can either provide that feature or defer it back to AHI for a software implementation. Reverb is handled this way.
When a program wants to make sound, it opens AHI. It might specify the device using the ModeID, or it (usually) just opens the default. The caller asks for whatever bit width, number of channels, and sample rate it wants; the driver will then return whatever it can offer, as close to the request as possible. It is VERY IMPORTANT for the caller to check the returned values to see what it really got.
You can't always get what you want. But if you try sometimes you just might find, you get what you need.
Note: Multiple programs can use AHI at the same time. Each one gets its own controls, including volume. AHI will mix it all down for the final output.
Once the program has set up its volume, pan, and other settings, it calls "Start()", and the output begins. From the driver end, the start command tells the driver to begin shoveling audio buffers to the hardware.
But here's another change from the usual: the sound device is expected to create a hard interrupt every time it needs more data. So the driver has to go through the process of setting that up and making all that stuff right. The program usually does a typical double-buffered routine. I'll write a short loop of pseudo-code to demonstrate:
Call AHI to fill Buffer A with audio data.
Call AHI to fill Buffer B with audio data.
Set hardware to create an interrupt at the completion of each buffer.
Post buffer A to hardware.
Post buffer B to hardware.
Elsewhere, the interrupt code looks something like this.
Figure out which buffer just completed.
Call AHI to fill that buffer with new audio.
Post that buffer to play immediately after the one that's playing now.
There may be similar code for recording, which of course copies data FROM the sound card instead of TO the sound card.
Code that is called from an interrupt must meet certain restrictions. I'll avoid that topic completely. ;)
When AHI issues a STOP command, your program sets a flag to prevent queueing any more sound buffers.
In a nutshell, that's the process.
There are a few things that are affected by this model:
The "Modefile" usage assumes that you know all the available options when you make the driver. This will NOT be the case for USB devices. Each device may have different capabilities.
AHI mounts everything it can find at startup. This assumes that you cannot add or remove anything after you have started. Again, this is not ideal for USB audio devices.
AHI does not offer much for interconnection between programs. It's really just suited for sound card drivers. (camd has spoiled me that way. Anything connects to anything)
AHI _DOES_ offer individual volume controls for each client. But many programs ignore the volume control and expect the user to manage it with "mixer".
"Mixer" has no interface with AHI at all. It is expected to "bang the hardware" directly to control the volume. This creates multiple issues.
Adding a sound card that is driven from device code (like a USB sound card) is tricky because USB does not create interrupts when it needs more audio data.
I'll stop there. Enough typing for tonight.
I will try to keep an eye on this forum for questions or comments.
Thanks for your long post.
Moving to source code, a generic AHI audio driver named "xxx" consists of the following set of files and the minimal functions to be implemented:
4) xxx-playslave.c --> Playing = get sound out of AHI and send it to the sound card
5) xxx-recslave.c --> Recording = get audio data from the sound card
6) xxx-accel.c ( ? )
I hope this can be of help to the talk.
Not strictly true. You can add modes at runtime using AddAudioModes, which in turn uses:
ULONG APICALL (*AHI_AddAudioMode)(struct AHIIFace *Self, struct TagItem * AHIPrivate);
ULONG APICALL (*AHI_RemoveAudioMode)(struct AHIIFace *Self, ULONG AHIPrivate);
ULONG APICALL (*AHI_LoadModeFile)(struct AHIIFace *Self, STRPTR AHIPrivate);
So some kind of dynamic driver such as a USB device would use those.
I stand corrected, thank you.
It may also be possible to generate device-specific modefiles "on the fly" for USB devices, then use those as targets for the AddAudioModes call. It might get a bit tricky for multiple modefiles to reference the same driver code base, but it would be worth trying anyway.
Specifically, calling AHI_RemoveAudioMode() could be useful in case someone unplugs a USB-based device, too.
After working on the USB audio driver I met a few issues that must be handled differently. I guess I let my frustration show in my previous post.
Thanks for keeping me straight!
Just a minor update to say I found the "Filesave" example very useful indeed.
Playing and recording (at the AHI level) are both there. Playing really shows how to get sound out of AHI. Recording (audio data from the sound card) is simulated by reading from an input file.
For the moment I got many answers to my questions, thanks.
Can I ask which is this mentioned command? Thanks.
It feels like I have reached a narrow point and I am stuck now.
My idea now is to develop an OS4-native AHI audio driver for the Toccata.
(Win-FS)UAE now supports Toccata emulation, so I guess it should be possible to output audio through an emulated Classic OS 4.1.
At least I can focus on a specific hardware (emulated).
What do you guys think about it?
Does documentation for Toccata exist? Any help?
The command for generating mode files is MakeMode/MakeAudioMode. It was renamed from MakeMode to MakeAudioMode in a later revision, but the currently latest public SDK still has it under the old name.
I never knew it made public release. :)
I had sent it over privately after he asked.
It's a very simple tool, but given the number of AHI drivers that have modefiles that look like they were made with a hex editor, I think it's a useful addition, especially when it is managed directly by the makefile.
I didn't spread your tool, as you requested.
I actually only asked about it here, and I appreciated that you then sent it to me; I still haven't even used it.
The last test version of an SDK release I have doesn't appear to have it. (.21)
It's under both names in my beta SDK. (probably amiupdate didn't remove the old version)
The majority I've seen were made from assembler source code :-)
They are essentially arrays of taglists, which is why you can sidestep the modefile for a dynamically added audio device, like a hypothetical USB one: just build your taglist at runtime.
Sounds like fun:
Question 1: does the 68k Toccata driver not work?
Question 2: did you look at the Toccata source in the main AHI 6.0 dev archive? There is some assembler in it, but it should provide a source of documentation at least.
Latest public SDK version is 53.24 so your version is old and outdated.
I am doing this just for learning purposes.
It should work; I have not tested it myself.
Yes I did, thanks.
The main problem now is that I couldn't find the following .h files:
This thread feels a bit stuck; a revival is needed, and the more contributions the better.
A question: does documentation for the Toccata sound card exist?
It appears that you can find those includes in the following archive.
Oddly enough, I could not download this file from my primary Aminet site. No idea why, and I'm not much for figuring it out right now. The link I provided works for me.
Interesting finding, thanks.
I am now doubtful about what I am doing.
The Toccata M68K AHI audio driver source code (from the AHI 6.0 package) makes use of a "mysterious" toccata.library V12 or greater, so that's another dependency and obstacle for the poor programmer who wants to learn :)
I realised that not having specific sound hardware in mind was really getting me nowhere.
Basically, I learnt how to program a "Filesave"-like driver, and I collected various other bits of info here and there. I like your articles on the Hyperion Blog a lot.
I would like to understand more about HDAudio, codecs, S/PDIF optical output, ...
We'll see if this becomes possible in the near future.
I'm glad you like some of my articles.
It does sound a bit like you're seeking very specific answers to very general questions.
The Filesave driver is great because it shows you everything you need to get audio OUT of AHI, without any of the card specific code for actually playing the sound.
HDAudio is an interesting topic. It's the replacement for AC97, and it is supposed to be a single driver that works for all devices. In truth that is a lot more complicated than it sounds. HDAudio is a protocol for the driver to query the card about its capabilities, but that's only going to be universal if the driver can ask all the right questions, understand all the possible responses, and parse all of that into whatever the end user really wants at the moment.
"Codecs", coders/decoders, are just the hardware/software act of converting audio into a stream of data and back. At the level of AHI, everything I am familiar with is just PCM. Pulse Coded Modulation. Fancy words for uncompressed sampled audio.
S/PDIF is an optical or coax connector that can be used for audio. It can support many codecs, but we only support PCM Stereo. Apparently if you want lots of channels, like 7.1 through S/PDIF, you have to have some form of data compression. We're not there yet.
But "plain" PCM is very common.
One very nice side-effect of optical S/PDIF is the complete electrical isolation between the computer (which typically has a noisy digital ground) and the audio system (which ideally has a very quiet analog ground). So once the optical cable is connected, sound quality is more dependent on your stereo receiver, and less affected by your computer system, within reason.
I once designed a MIDI controlled analog audio mixer, and the greatest challenge was to power both the digital and analog circuits with a common ground but as little noise as possible. If I were to try that again today, I'd probably go all digital instead.
Yep, it is my attempt to figure out how these things work.
I guess you mean the X1000, right?
Thanks for your reply; you always provide useful info between the lines.
The X1000 has 7.1 audio through the analog jacks. It also supports stereo through the optical S/PDIF output.
It does NOT support 7.1 channels over the S/PDIF at this time (and it's not planned as far as I know).
It supports sampling/playback frequencies as high as 192 kHz, and because it uses audio-specific DMA channels, it does so with very little overhead.
There have been certain frequencies or combinations removed in the interest of simplicity. Some rates were available on the analog interfaces, but not on the digital ones; we decided it would be easiest to remove them to prevent confusion. We later learned that S/PDIF does not have a standard set of baseline capabilities: every device can support or ignore whatever it cares to. So there are some settings that won't work with specific brands of audio gear. With S/PDIF you really need to compare all the specifications for both sides of each link if you want to be sure some specific combination will work.
I actually had to buy an optical to analog adapter so I could develop and test the interface on the X1000. I'm still learning a lot of this myself, and I reserve the right to be wrong. :)
Really cool tech audio specs for the X1000, both hardware and software.
How does 7.1 audio work within the AHI audio driver? Do you deliver data to 8 channels?
Yes. The AHI_AllocAudio() call specifies AHIA_Sounds of 8. If that AllocAudio is granted, then the AHISampleInfo is built up with the type AHIST_L7_1, using 8 * 32 bit samples per frame.
This feature was made available with version 6 of AHI. It works only with 32 bit samples, or 24 bit samples in 32 bit containers. Also, we can play back 8 channels at once, but we can record no more than Stereo.
The order of the channels for 7.1 is:
I wonder how many games make use of 7.1 for better positioning of audio cues?
If I understand correctly, a generic client application of the driver is in charge of preparing packed 32-bit 7.1 sounds (AHIST_L7_1) in order to play 7.1?
And how should AHIA_Channels be set?
I don't know about games. I once programmed a ProTracker MOD player for OS3; does it make sense to redesign such music players to play back 7.1?
I guess the S/PDIF bandwidth can't handle more than stereo, right?
I see that other systems (competitors) went with 7.1 over HDMI instead.
Yes, games for the PlayStation 4, but that is not Amiga-related; sorry for the off-topic ;)