Hi LiveIfOnEcho,
Unfortunately, browser audio support is not quite there yet. It does not support ASIO on Windows, which some sound cards need for low-latency audio. It also does not support VST/AudioUnit plugins, which some users rely on for their sounds (playing a MIDI controller through a piano VSTi plugin, for example).
While it's possible to implement a web client, software monitoring may not be usable due to high latency. Software monitoring is recommended (the jammr setting is called “Play back my audio”) because you hear exactly what others hear. USB soundcards usually offer hardware monitoring (also called “direct monitoring”), so that can be used instead, but its volume level is independent of the level that other users hear from you, which makes it hard to use.
Hopefully the situation will improve as web browser and audio APIs evolve. But it takes time since it needs to work well in Safari, Firefox, Chrome, and Edge on Windows, Mac, and Linux before jammr can rely on it as a solution for all users.
stefanha,
Thinking of the scenario of porting the JavaScript-based NINJAM client referenced elsewhere to Android's WebView coding structures, and having found this:
https://developer.android.com/ndk/guides/audio/audio-latency#best-practices
. . . I wonder whether the referenced minimum latency standard of 20 ms or less is sufficient to integrate successfully into a NINJAM-style jam session. What do you think?
If there's some plug-and-chug modular code swapout that can be done with a reasonable amount of code rephrasing, I'd be interested to know about it. The idea is to enable jamming between a personal computer here, with the NINJAM server and the outdated Windows client installed, and a mobile phone overseas with a duct-taped .apk file wired over to it. With the latency standard referenced above for some Android devices, is that a doable scenario, do you think? How about with the longer 45 ms standard, which seems more universally established across devices? Is that already too laggy?
Curious if it's worth looking at in detail. What do you say?
PS: the link above claims musicians need 10 ms latency at most, but stops short of saying that devices just don't offer this. Do you know if there's any standardized way to look up mobile devices by their achievable audio latency? That could prove useful to check beyond Android's low-precision latency flags. Can you point me in any narrower direction than standard web search engines are likely to turn up?
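For reference, here's the kind of check I have in mind for those flags: a rough, untested Kotlin sketch based on my reading of the Android docs (the function name is mine, not from any existing project):

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.media.AudioManager

// Untested sketch: read the coarse latency hints Android exposes.
// Per the NDK docs, "low_latency" roughly means <= 45 ms continuous output
// latency and "pro" roughly means <= 20 ms round-trip latency.
fun reportLatencyHints(context: Context) {
    val pm = context.packageManager
    val lowLatency = pm.hasSystemFeature(PackageManager.FEATURE_AUDIO_LOW_LATENCY)
    val proAudio = pm.hasSystemFeature(PackageManager.FEATURE_AUDIO_PRO)

    val am = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    // The device's preferred sample rate and buffer size for its fast audio path.
    val sampleRate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE)
    val framesPerBuffer = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER)

    println("android.hardware.audio.low_latency: $lowLatency")
    println("android.hardware.audio.pro: $proAudio")
    println("preferred sample rate: $sampleRate Hz, frames per buffer: $framesPerBuffer")
}
```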
My own personal threshold is around 15 milliseconds. Above that, I find I have to consciously adjust my playing too much and it becomes a different experience. USB sound cards usually achieve 5-12 milliseconds of latency without special configuration. Which value you find acceptable is subjective, but you can try searching for research studies that tested which latencies musicians found acceptable.
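To put those numbers in perspective, per-buffer latency is simply the buffer size divided by the sample rate, so typical low-latency buffer sizes of 256-512 frames at 48 kHz land right in that range. A quick back-of-the-envelope sketch (illustrative arithmetic only, not jammr code):

```kotlin
// Illustrative arithmetic: the latency contributed by one audio buffer.
fun bufferLatencyMs(frames: Int, sampleRate: Int): Double =
    frames * 1000.0 / sampleRate

fun main() {
    println(bufferLatencyMs(256, 48000))  // ~5.3 ms
    println(bufferLatencyMs(512, 48000))  // ~10.7 ms
}
```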
I'm not aware of a comprehensive way of finding out mobile device audio latencies. Android is notorious for poor audio latency; iOS has historically been better. Keep in mind that the hardware varies too (e.g. an iRig HD2 interface), so it may not be possible to give a more specific number than the 20 ms that the Android NDK docs provide.
In this situation you could consider relying on hardware monitoring instead of software monitoring. Most USB sound cards support hardware monitoring (often with a blend knob that controls how much of the input signal is sent directly to the output in hardware). Then there is no latency problem for the user. However, it creates the problems I mentioned previously:
1. The user's hardware monitoring volume level is independent of the recording volume level. The software will need to aid users in getting a good mix, because they all hear themselves differently.
2. Software instruments and effects cannot be used. This rules out MIDI synthesizers/samplers and apps like amp simulators.
3. Software latency compensation may be necessary to keep the audio that remote users hear correctly aligned (see the rough sketch after this list).
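For point 3, one very rough way to estimate the output-side delay to compensate for on Android is to derive it from the playback buffer. This is only a simplified, untested sketch; a real client would also account for input latency and could use timestamps for a more precise figure:

```kotlin
import android.media.AudioTrack

// Untested sketch: estimate the output-side latency from an AudioTrack's buffer
// so recorded or remote audio can be time-shifted by roughly this amount.
// A real client would also measure input latency and could use
// AudioTrack.getTimestamp() for better accuracy.
fun estimatedOutputLatencyMs(track: AudioTrack): Double =
    track.bufferSizeInFrames * 1000.0 / track.sampleRate
```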
In my opinion hardware monitoring is not a general solution. Too many users will be affected by its shortcomings. However, it might be fine in your case and would solve the latency problem.
So, wondering if jamming would be possible using a couple of distributed Pixel 3a phones running ninjam-js ported to Android WebView objects in a makeshift .apk, plus a Windows laptop where ASIO4ALL has no access to the laptop's speakers due to the inability to disable MS GS Wavetable Synth, running the NINJAM server over home wifi. Looks like this is the status of this project here . . .
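Roughly what I'm picturing for the WebView wrapper side, just to make it concrete (untested, and the asset path is a placeholder rather than a real project file):

```kotlin
import android.annotation.SuppressLint
import android.app.Activity
import android.os.Bundle
import android.webkit.WebView

// Untested sketch of wrapping a JS NINJAM client in a WebView .apk.
// The asset path below is a placeholder for illustration only.
class JamActivity : Activity() {
    @SuppressLint("SetJavaScriptEnabled")
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val webView = WebView(this)
        webView.settings.javaScriptEnabled = true
        // Bundle the ported client under src/main/assets/ and load it locally;
        // the page itself would connect to the NINJAM server on the home network.
        webView.loadUrl("file:///android_asset/ninjam-js/index.html")
        setContentView(webView)
    }
}
```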
If you can choose to support limited hardware then it should be doable, especially if you are willing to replace the audio code in ninjam-js with custom code that uses the Android NDK.
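For example, here is a very rough, untested sketch of the kind of bridge I mean: expose a native playback path to the page via addJavascriptInterface and feed it from JavaScript, with AudioTrack's low-latency mode standing in for a full NDK/Oboe implementation (class and method names are made up for illustration, not from ninjam-js):

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack
import android.util.Base64
import android.webkit.JavascriptInterface

// Untested sketch: a JS-to-native audio bridge for a WebView-based client.
// AudioTrack with PERFORMANCE_MODE_LOW_LATENCY (API 26+) stands in here for a
// full NDK/Oboe playback path.
class NativeAudioBridge(sampleRate: Int = 48000) {
    private val track = AudioTrack.Builder()
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build()
        )
        .setAudioFormat(
            AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(sampleRate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build()
        )
        .setBufferSizeInBytes(
            AudioTrack.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO,
                AudioFormat.ENCODING_PCM_16BIT
            )
        )
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY)
        .build()
        .also { it.play() }

    // Called from JavaScript with base64-encoded interleaved 16-bit PCM,
    // since the WebView JS bridge only passes primitives and Strings reliably.
    @JavascriptInterface
    fun playPcm(base64Pcm: String) {
        val bytes = Base64.decode(base64Pcm, Base64.DEFAULT)
        track.write(bytes, 0, bytes.size)
    }
}

// Wiring: webView.addJavascriptInterface(NativeAudioBridge(), "nativeAudio")
// lets the page call window.nativeAudio.playPcm(...) instead of Web Audio output.
```

Calls from the page arrive on a WebView-managed background thread, so blocking writes to the AudioTrack are acceptable there, though the base64 marshalling adds overhead that a real implementation would avoid by keeping the audio path entirely in native code.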