Introducing Olympus & Olympus I/O - A new perspective on modern music playback


For those who have just started reading up on Olympus, Olympus I/O, and XDMI, please note that all information in this thread has been summarized in a single PDF document that can be downloaded from the Taiko website.

https://taikoaudio.com/taiko-2020/taiko-audio-downloads

The document is frequently updated.

Scroll down to the 'XDMI, Olympus Music Server, Olympus I/O' section and click 'XDMI, Olympus, Olympus I/O Product Introduction & FAQ' to download the latest version.

Good morning WBF!


We are introducing the culmination of close to four years of research and development. As a bona fide IT/tech nerd with a passion for music, I have always been intrigued by the potential of leveraging the most modern technologies to create a better music playback experience. This, among other things, led to the creation of our popular, perhaps even revolutionary, Extreme music server five years ago, which we have been steadily improving and updating with new technologies throughout its life cycle. Today I feel we can safely claim it holds its ground against the onslaught of new server releases from other companies, and we are committed to keep improving it for years to come.

We are introducing a new server model called the Olympus. Hierarchically, it positions itself above the Extreme. It provides quite a different music experience from the Extreme, or any other server I've heard, for that matter. Conventional audiophile descriptions such as soundstaging, dynamics, color palette, etc., fall short of describing this difference. It sounds neither digital nor analog; I would be inclined to describe it as coming closer to the intended (or unintended) performance of the recording engineer.

Committed to keeping the Extreme as current as possible, we are also introducing a second product called the Olympus I/O. This is an external upgrade to the Extreme containing a significant part of the Olympus technology, allowing it to come near, though not quite reach, Olympus performance levels. The Olympus I/O can even be added to the Olympus itself to elevate its performance further, though the uplift is not as dramatic as when adding it to the Extreme. Consider it the proverbial "cherry on top".
 
Wild to spend literally weeks researching and sharing findings (and collectively many person-years for folks posting and sharing here) to prompt an AI to generate a 30-minute podcast.

Whole point is to share information and insights to help others, so I guess that’s ok?

Thankfully the AI presented an excellent summary in a very accessible way. It would be so easy to nudge it in a different direction, though.
 
I’ve just listened to the 32-minute “AI Duet” presentation of the Olympus.

It’s truly impressive and, let’s be honest, rather pleasant to listen to.

It’s a great way to get an idea of what the Olympus-XDMI is all about.

However, I did spot a small mistake: there are no moving parts in the Olympus, contrary to what is said. That could be confusing, and it comes up 2–3 times (for example, at 21:42).

It’s a shame that the Olympus I/O wasn’t mentioned.

Including it would have provided a complete and accessible overview of the Taiko ecosystem.
 
The information is good, but it reminds me of a great book that gets a bad movie adaptation. I'm curious how they come up with what they perceive to be the best voices to deliver the information. Clearly they missed the mark on this one... They could have used Ron...
 
I posted this AI-generated “natural” podcast about the Taiko Olympus, which is based on a meta-analysis of curated data… feel free to leave your feedback. I hope you’ll find it surprising, entertaining and informative.


Could you clarify your role in the creation of this “podcast”?

While the tone and content are impressive, what really bothered me was the lack of attribution. The AI has clearly drawn deeply from @ray-dude's review and from the comments in this WBF thread.

Yet the AI uses vague phrases like “our sources indicate” and “the research suggests” without attribution. This is really troubling.
 
Hi @austinpop - those are very valid questions. My primary role in creating the podcast is to choose the subject and curate the relevant data sources used by the generative AI. The data sources and links used are listed in the “Sources and Credits” portion of the video’s description. The general intent is to provide some kind of value to the audiophile community.

Once the data sources are provided, the AI generates the content entirely from those sources, through some kind of proprietary meta-analysis and aggregation; the actual process, the script used for speech generation, and the output cannot be natively accessed or modified. I give general direction to the generative AI, e.g., asking for emphasis on certain matters or a particular tone, which is typically a fairly iterative process. I also perform general quality control of the output.

It does appear the language models use carefully chosen language. If the phrases, discussions, or references actually point to a specific individual in the data sources (something I wasn’t aware of), I will of course add that individual to the credits, with proper consent. @ray-dude
 
Hello Hifickips,

I find the process fascinating. I’ve used AI in only minimal ways. The first time I used it (for an audio query of all things!) I found the response to be well below an expert human level. I then tried other model versions and was impressed with the responses to other more technical queries.

You can quibble with anything, but I was quite impressed with the results you obtained with what appears to be significantly less effort than a traditional podcast production would have required.

It was informative, and presented the subject in a way that was easy to understand. And frankly, it was better than a number of other audio podcasts out there.

Thank you for posting it.

What AI software did you use? And how long did it take you to produce the result you posted?

Thanks
 
It is fascinating; what would be more fascinating is if it presented something that we didn't already know...
Hi John T,

When I used the qualifier “fascinating,” I meant the process of using AI to create the podcast, not necessarily the content.

Clearly, owners of the Olympus and participants in this thread are already very familiar with the Olympus and its capabilities. And I agree with you that the podcast does not add significantly to a subject many of us have been following intently for some time.

But our Taiko world is relatively small. And while it may not be the purpose of the video, the podcast can possibly serve to increase awareness among those who know very little about XDMI and the significant advances the Taiko team has accomplished.
 
I wasn’t attempting to sound like a smart ass; I completely agree with you regarding the creation aspect, genuinely fascinating. And it was helpful. I seriously believe AI will advance to the point of elaborating much more deeply (kind of scary) on what it is fed. That was what I was alluding to...
 
I would be interested in how this YouTube video was created, because it is truly scary how real it sounds.
Your local AI expert chimes in here to say this is a Google product called NotebookLM. It's quite impressive, but watch out for gotchas, as it can make stuff up (these are called hallucinations in the world of LLMs, or large language models). You can feed it any document and it creates this chatty duet. If I were still teaching, I would use it to enliven my lecture notes!
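For anyone wondering what "feed it any document" looks like in practice: NotebookLM itself is a web app and doesn't expose its podcast pipeline, but here is a rough sketch of the same document-to-dialogue idea using Google's google-generativeai Python SDK. The model name, source file, and prompt are illustrative assumptions on my part, not how NotebookLM actually works under the hood:

```python
# A minimal sketch of the "feed it a document, get a chatty duet" idea.
# Assumptions: a Gemini API key, the google-generativeai package, and a
# hypothetical curated source file; NotebookLM's real pipeline (and its
# voice synthesis step) is proprietary and not reproduced here.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

with open("olympus_sources.txt") as f:  # hypothetical curated sources
    sources = f.read()

prompt = (
    "Using ONLY the source material below, write a two-host podcast "
    "script in which the hosts discuss the key points conversationally. "
    "Do not state anything that is not supported by the sources.\n\n"
    + sources
)

response = model.generate_content(prompt)
print(response.text)  # the text of the generated duet script
```

Note this only gets you the script; the real product layers synthesized voices on top. And the same hallucination caveat applies: the "ONLY the source material" instruction reduces, but does not eliminate, made-up details.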
 
That's too wild, "hallucinations" in the world of LLMs, meaning making stuff up! WOW! I take it the chatty duet is the standard way NotebookLM presents itself in a podcast format?
 
