Since my post that started this thread, I have run into multi-user issues, as well as issues under XFCE that were not there in the previous AV Linux release. That was on our Netflix laptop, but now I have to install the latest release on my work laptop. Getting Wi-Fi to work at all required installing NetworkManager, and I still end up with multiple instances of its applet in the XFCE task bar.
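In case it helps with debugging, my first step was to check whether more than one autostart entry was launching the applet. This is only a sketch of the approach, and it assumes the duplicates come from overlapping XDG autostart entries, which may well not be the cause here:

```
# Look for every autostart entry that launches nm-applet.
# System-wide entries live in /etc/xdg/autostart, per-user ones in
# ~/.config/autostart; XFCE runs both unless the user copy hides one.
grep -l nm-applet /etc/xdg/autostart/*.desktop ~/.config/autostart/*.desktop 2>/dev/null

# Per the XDG autostart spec, a user copy of the same .desktop file
# overrides the system-wide one, so adding Hidden=true to a user copy
# disables that entry for this user without touching the system file.
cp /etc/xdg/autostart/nm-applet.desktop ~/.config/autostart/
echo "Hidden=true" >> ~/.config/autostart/nm-applet.desktop
```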
I am happy to assist in sorting these out as much as I am able, given my use case, which is explained below.
This is my dilemma: our hardware is fine to run all of this for years longer, but we cannot afford to upgrade it to what Microsoft requires for Windows 11.
What originally drew me to AV Linux were the following points:
- That the integrations are all done, which is an enormous amount of work (thank you, Glen!).
- The focus on performance, as that impacts live work, of which I do a lot (see below).
- That it used (now sadly past tense) XFCE by default, which can easily be tweaked to forgo eye candy, so as not to impact live use.
My plan was to run it for some time as my daily driver on my laptop before I would go to the next step — full live use, covering the following:
- Running our DMX lighting controller using QLC+ (see the startup sketch after this list).
- Running Worshipsong Band (under Bottles) to control lyrics display in sync with bi-directional multi-track audio (simultaneous playback & recording) using Ardour, connecting to our sound desk. This allows individual multi-track stems to be mixed by our engineer live, using the desk's physical control surface instead of a mouse on a screen. The flow of a song is controlled from the stage by the band leader over Wi-Fi, utilising a tablet, and all band members can see both what they're playing now and what is queued up. Click and cues also stay in sync as live changes to a song's geography are made.
- Running four separate monitor outputs, some of which are duplicated physically as well, i.e., up to four separate images going to lots of screens. Eventually I'd like to run them with Multicast over Wi-Fi, with a Raspberry Pi at each projector or screen.
- Running OBS Studio to handle the following:
- Taking live video and audio input from a camera via an external AV capture card.
- Presenting a virtual camera (with audio) to Zoom for live streaming.
- Applying pre-set scenes to any of the four screen outputs as well as to the virtual camera. The pre-set scenes enable things like picture-in-picture on any of the OBS Studio outputs. They also make it possible to serve people without a direct line of sight, when we have to use overflow seating or even outside projection for evening events: one scene would show lyrics, another only the speaker, another PowerPoint slides with the speaker small in the top right, etc.
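Since much of the above boils down to launching a few programs into known states, here is the sort of startup script I have in mind for the lighting and video side. All file and collection names are placeholders, and I am assuming the documented command-line switches for both programs (QLC+'s --open/--operate/--kiosk/--web and OBS Studio's --collection/--scene/--startvirtualcam) do what I expect; treat it as a sketch, not a tested script:

```
#!/bin/sh
# Hypothetical show startup; every name below is a placeholder.

# QLC+: open the show file, go straight to operate mode, lock the UI
# down (kiosk), and expose the web interface so the tablet on stage
# can reach it over Wi-Fi.
qlcplus --open /srv/shows/sunday.qxw --operate --kiosk --web &

# OBS Studio: load the scene collection for this event, start on the
# lyrics scene, and bring up the virtual camera that Zoom will consume.
obs --collection "Sunday Service" --scene "Lyrics" --startvirtualcam &
```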
For large events all of the above will be happening simultaneously.
In addition, the machine will have multiple users with different levels of access, most of whom will have any changes rolled back when they log out. Their next session will then start exactly as the previous one did, including open windows and their screen positions, since each user has a static role to fulfil.
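For the roll-back part I am leaning towards something simple rather than clever: keep a pristine template of each role's home directory (with the saved XFCE session, i.e. open windows and positions, baked into it) and restore it before each session. A minimal sketch, assuming templates under /srv/profiles and a display-manager or PAM hook that runs it as root; all paths are hypothetical:

```
#!/bin/sh
# Hypothetical session-reset hook: restore a user's home directory
# from a pristine template so every session starts in a known state.
# Usage: reset-session.sh <username>
user="$1"
template="/srv/profiles/${user}-template"

# -a preserves permissions, timestamps and symlinks from the template;
# --delete removes anything the user created during the last session.
rsync -a --delete "${template}/" "/home/${user}/"
chown -R "${user}:${user}" "/home/${user}"
```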
This is not a general-purpose computer; it is an AV controller for critical live use. Enterprise-level management will be exercised over the whole lot, given the potential for disaster when someone without the necessary knowledge starts changing things.
I trust you can see that eye candy is the last thing I want, and that the live performance hit imposed by Enlightenment's cool looks is a waste for me. What I need is raw performance from the programs I am running in the moment.
Live production doesn't have the luxury of multiple takes, nor can a render simply be allowed to take two minutes longer: everything has to work at the same time, all the time. Of all of these, lights and audio are the most crucial. A single dip in the lights or a glitch in the sound will be remembered, while most people won't even notice ten dropped video frames per minute as long as the sound and lights are OK.