
AITech Interview with Markus Schwarzer, Chief Executive Officer of Cyanite

In an interview with Markus Schwarzer, CEO of Cyanite, we explore the company's AI-powered music transformer model and its potential to transform the music industry.

With AI transforming industry after industry, music is no exception. Markus and his team at Cyanite have been building an AI-powered music transformer model that promises to change how we listen to and search for music, delivering the right music content regardless of the use case. In this interview, we dive into the world of AI in music, discussing everything from the technology behind Cyanite's transformer model to the impact it will have on the music industry.

So, without further ado, let's dive right in!

Markus, can you tell us about your background and how you got involved in the music tech industry?

I played in bands pretty much throughout my life and tried to make it as a musician while still in high school. Since I was clearly not good enough, I decided to go into the business side of things, working at labels and in PR/promotion.

What inspired you to start Cyanite, and what problem does it aim to solve?

We saw big tech companies developing all these crazy recommendation algorithms for music consumers. But music selection in advertising and marketing was still rather archaic, with people clicking through track after track in search of suitable ones. When we looked into the market, we discovered that no technology was available to solve that properly, so we built it ourselves.

Furthermore, there is a lot of uncertainty involved in music decisions when many opinions on music taste are being thrown around. You rarely end up with the best possible option for your purpose, and it takes ages to get there. We bring clarity and objectivity into that process by delivering easy-to-interpret data on, among other things, the emotional effect and brand fit of music.

How do you ensure that Cyanite’s AI models are unbiased and free from algorithmic biases?

That is a really important question for us. In fact, our Chief of Data, Roman, is driving the conversation internally on how to best mitigate biases coming both from the data and from the creators (us). The algorithms are only as good as the input data, so we try to source data from around the globe and not just the West. The fact that we have customers in Korea, Japan, China, Dubai, South Africa, Colombia, Kazakhstan, and Brazil speaks to this work and is a big success for us.

How does Cyanite partner with music industry stakeholders such as record labels, streaming services, and artists?

As mentioned, most recommendation algorithms were built for music consumption in B2C. They're not necessarily great for B2B use cases because they're not targeted enough and don't deliver any data insights on the music, such as its emotional effect. Our algorithms are specifically tailored to the language of advertisers, marketers, and music supervisors. When a music company or rights holder wants to make it easier for the demand side to find their music and make an informed music decision, they come to us. We run their entire catalog through our engine and turn it into a kind of B2B Spotify. Afterward, they can answer music briefs more easily and back up their decisions with our detailed data insights.

Can you describe the technical architecture of Cyanite’s AI-powered music analysis platform?

The core of our tech is artificial intelligence based on transformer models. You will know these models from products like Midjourney, ChatGPT, or DALL-E. In an initial training process, our AI was taught every feature that can serve as a descriptor for music and sound. From this layer, we derive all our insights into music. It's highly flexible and can be tailored to a customer's specific language or use case. We provide this service via an API, our own web app, or the AWS Marketplace.
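
As a purely illustrative example, a request to such a music-analysis API might look like the sketch below; the endpoint URL, payload fields, and token are hypothetical placeholders, not Cyanite's actual interface.

```python
# Hypothetical sketch of querying a music-analysis API; the endpoint,
# payload fields, and auth token are illustrative placeholders only.
import requests

API_URL = "https://api.example.com/v1/analyze"   # placeholder, not a real endpoint
headers = {"Authorization": "Bearer <YOUR_TOKEN>"}

payload = {"audio_url": "https://example.com/tracks/demo.mp3"}
response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
response.raise_for_status()

analysis = response.json()
# e.g. {"mood": ["uplifting"], "bpm": 120, "instruments": ["piano", "strings"]}
print(analysis)
```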

How do you ensure the quality and accuracy of the music metadata generated by your platform?

We have a meticulous quality assurance mechanism: we run the analysis results of new machine learning models through a variety of automated and manual tests, including a qualitative survey of music supervisors.

Can you describe how your platform uses machine learning to analyze music and detect attributes such as mood, tempo, and instrumentation?

We mostly use technology from image recognition and annotation to retrieve information from audio. So, in a first step, we transform the audio into its visual representation, called a spectrogram. These spectrograms show information about pitch and volume over the time progression of the song. In the training process, the AI recognizes and memorizes specific features and patterns in sequences of pitch and volume that are typical of, for example, certain moods. Whenever it sees a new track and recognizes the same features and patterns it saw in the training data, it predicts that this specific mood is evoked by the song.
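
To make that first step concrete, here is a minimal sketch of turning audio into a mel spectrogram using the open-source librosa library; the parameter choices are illustrative and say nothing about Cyanite's actual preprocessing.

```python
# Minimal illustration of the audio -> spectrogram step using librosa;
# parameter choices are illustrative, not Cyanite's actual pipeline.
import librosa
import numpy as np

y, sr = librosa.load("track.mp3", sr=22050, mono=True)       # waveform and sample rate
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)  # power mel spectrogram
mel_db = librosa.power_to_db(mel, ref=np.max)                 # log scale, image-like

# mel_db is a 2-D array (frequency bins x time frames) that an
# image-style model can scan for mood-typical patterns.
print(mel_db.shape)
```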

How do you handle the privacy and security concerns of your customers when processing their music data?

Every user has their own database, which is ring-fenced so that only they have access to it. For customers who require a higher level of security, for example when an international top-100 band is releasing new music and we receive the audio pre-release, we can obfuscate and encrypt the data so that it is still recognizable to the AI but inaudible and thus useless otherwise.

How do you balance the need for automation with the need for human oversight in your platform?

It depends on the use case. We have customers who fully trust the AI and release all our metadata on their platforms because it is perfect for their use case. Others quality-check the tagging manually. Especially in audio branding, most agencies have their own methodology that is specific to their workflow. Those companies manually add their own metadata, in their own terms, on top of the metadata we provide, for example which colors, celebrities, or clients they associate with the music. That is very individual to each company.

How does your platform handle music that is in a language it doesn’t recognize?

Our AI recognizes 113 different languages. Of course, that's not every language, but it's a damn huge amount.

What is your approach to continuous integration and continuous deployment (CI/CD) in your software development process?

That is something very important to us. We work closely with our customers to improve the platform around their needs. Approximately every three months we add new features and functionality to our system, mostly based on customer feedback and wishes. We recently added over 20 new instrument classes because three of our customers and one company we are currently talking to requested them.

How do you approach scalability and performance optimization in your platform?

Good question. One of the big topics at the moment is scale: we onboard around 3 million songs every month. The switch from traditional music classification via CNNs to transformer models has already improved runtime by 10x, but that is not where we want to stop. Our goal is to build a system that is first and foremost robust and can analyze songs in near real-time. Right now it takes us about three seconds per minute of audio. Pretty close, but we want to bring it down even further.
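
To put those numbers in perspective, here is a back-of-envelope estimate; the songs-per-month figure and per-minute analysis speed come from the interview, while the average track length is an assumption made purely for illustration.

```python
# Back-of-envelope throughput estimate. SONGS_PER_MONTH and the analysis
# speed are from the interview; the average track length is assumed.
SONGS_PER_MONTH = 3_000_000
AVG_TRACK_MINUTES = 3.5                       # assumption for illustration
ANALYSIS_SECONDS_PER_AUDIO_MINUTE = 3

total_seconds = SONGS_PER_MONTH * AVG_TRACK_MINUTES * ANALYSIS_SECONDS_PER_AUDIO_MINUTE
total_hours = total_seconds / 3600
print(f"~{total_hours:,.0f} single-worker compute hours per month")
# ~8,750 hours, i.e. roughly a dozen workers analyzing audio around the clock
```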

How do you handle versioning and backward compatibility in your API?

Another great question. Whenever we release new machine learning models, we run all catalogs of our subscription customers through them free of charge. This is one of the greatest advantages of AI tagging: we can react quickly to new market trends, changes in demographics, or shifts in mood responses, and ensure our customers' metadata is always up to date. On the API side, we generally try to have as few breaking changes as possible and would rather add new endpoints than change existing ones. If we replace a classifier one-to-one, we add the new one to the API and grant the old one a grace period of six months, during which we work closely with our customers to make the switch as easy as possible.
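
As an illustration of that "add new endpoints, deprecate old ones" pattern, here is a hypothetical sketch; the routes, response fields, and sunset date are invented for illustration and are not Cyanite's actual API.

```python
# Hypothetical sketch of the versioning pattern described above;
# routes, fields, and dates are illustrative, not Cyanite's actual API.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/v1/mood")            # legacy classifier, kept during the grace period
def mood_legacy():
    resp = jsonify({"mood": "uplifting"})
    resp.headers["Deprecation"] = "true"
    resp.headers["Sunset"] = "Sat, 01 Mar 2025 00:00:00 GMT"  # illustrative date
    return resp

@app.get("/v1/mood-advanced")   # new classifier added as a separate endpoint
def mood_advanced():
    return jsonify({"moods": [{"label": "uplifting", "score": 0.87}]})
```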

Markus Schwarzer

Chief Executive Officer of Cyanite

Markus is an entrepreneur at the intersection of AI and music. Through his company, Cyanite, he aims to help democratize access to high tech. With a background in business administration and the independent music industry, Markus has published a number of articles on innovation and business modeling in prestigious music industry publications and teaches at several universities.
