modeless · 2 years ago
XTTSv2 is only slightly behind StyleTTS 2 near the top of the TTS Arena leaderboard, though they are both far behind Eleven Labs: https://huggingface.co/spaces/TTS-AGI/TTS-Arena
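For context, an arena-style leaderboard like this aggregates anonymous pairwise A/B votes into a single ranking. A minimal sketch of an Elo-style update over such votes (an assumption on my part — TTS Arena's exact rating method may differ, and the function names here are mine):

```python
def elo_update(r_a, r_b, a_wins, k=32):
    """Return updated ratings after one A-vs-B vote."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    # Zero-sum update: whatever A gains, B loses.
    r_a += k * (score_a - expected_a)
    r_b += k * (expected_a - score_a)
    return r_a, r_b

def rank(votes, models, start=1000):
    """Fold a list of (winner, loser) votes into final ratings."""
    ratings = {m: float(start) for m in models}
    for winner, loser in votes:
        ratings[winner], ratings[loser] = elo_update(
            ratings[winner], ratings[loser], a_wins=True)
    return ratings
```

With enough votes the ordering stabilizes, which is why small gaps near the top of the leaderboard (like XTTSv2 vs. StyleTTS 2) are less meaningful than the large gap to Eleven Labs.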

Personally I prefer StyleTTS 2, and it has a better license. But XTTSv2 has a streaming mode with pretty low latency, which is nice. I did run into hallucination issues, though: it fairly frequently produces nonsense words or inserts extra syllables into words.

As others mentioned, the company shut down, so there won't be any updates to XTTS.

eginhard · 2 years ago
They just shared the paper for XTTS, which got accepted to Interspeech and might be the reason for this being posted now: https://arxiv.org/abs/2406.04904
jsemrau · 2 years ago
Interesting. I got quite good results for my long-form Substack by combining XTTSv2 with Nvidia's NeMo.
WhitneyLand · 2 years ago
Anyone have a sense for how these compare to OpenAI’s TTS?
jonahx · 2 years ago
Somewhat unrelated, but given that anyone can vote anonymously, how is the TTS-Arena protecting itself against bots or even rings of humans gaming the system?
modeless · 2 years ago
Low stakes, I guess
vessenes · 2 years ago
NB: Coqui is no longer actively maintained. I'm not sure what the team is up to now. The open market is definitely in need of an upgraded TTS offering; Eleven Labs is far ahead at the moment.
eginhard · 2 years ago
We do maintain a fork, mostly with bug fixes for now: https://github.com/idiap/coqui-ai-TTS PRs welcome :)
dlx · 2 years ago
Any progress on the license situation? I'd love to work more on it, but worried about it being a bit of a dead end due to uncertainty about the future of the license and not being able to use it in any commercial projects.
personjerry · 2 years ago
Not surprising. When I was researching options for a client I tried a few companies including ElevenLabs and Play.ht, each seemed happy to talk to us... except Coqui. I think I went as far as reporting bugs to them, just to have them aggressively ignore me. I guess they're more of a research team than a business?
jokethrowaway · 2 years ago
They were very friendly and welcoming.

The main problem is quality: Eleven Labs is so far ahead, even though their API is not very flexible.

Meta's Voicebox is the only other option that feels realistic, but it's research-only for now.

phyce · 2 years ago
Coqui is great, but another fantastic tool for TTS I recommend checking out is Piper. The voice quality is great, it's extremely lightweight, and it's fast enough to generate TTS in realtime https://github.com/rhasspy/piper
dv35z · 2 years ago
Can you suggest (1) How to get it working on a Mac, (2) alternatively, how to get it running in a Docker container (on a mac)?
mlboss · 2 years ago
Works with the RHEL 9 Docker image and the compiled binary.
huskyr · 2 years ago
Piper seems very interesting, but unfortunately the last time I tried it on macOS it didn't seem to work (anymore).
nishithfolly · 2 years ago
This was a great team. Sad to see they had to shut down.
ks2048 · 2 years ago
I don't know anything about the startup/VC world, but does anyone have insight on why this failed? It seemed to be one of the highest profile TTS projects and I thought money was just pouring into AI startups.
eginhard · 2 years ago
Some insights from one former lead: https://erogol.com/2024/01/09/goodsandbadsofopensource

TLDR: Making money from open-source is hard.

satvikpendem · 2 years ago
How does it compare to this recent Show HN, MARS5 [0]? Coqui is not maintained anymore so I'd be interested in what the SOTA is for open source TTS.

[0] https://news.ycombinator.com/item?id=40616438

SubiculumCode · 2 years ago
I have a pet ML project that I'm doing for fun: building a custom transcription and diarization pipeline for a friend's podcast [1]. My initial solution was a straightforward implementation using Whisper medium for transcription and NeMo for diarization, based on [2]. The results are generally not bad, but since my application involves a fixed set of five known speakers, I thought surely I could fine-tune the NeMo (or pyannote) diarization model on their voices to improve accuracy.

Audio samples are easily obtained from their podcast, but manual data labeling is painful for a hobby activity. Further, from what I understand, the real difficulty in performant diarizer models is not speaker recognition generally, but specifically speaker recognition while there is overlapping speech between multiple speakers. I am not even sure how to best implement a labeling procedure for segments with overlapping speech.

I started to wonder whether I might bootstrap a decent sample by leveraging TTS voice-cloning models to simulate the five speakers in dialogues with overlapping speech segments. So I ask HN: is this hopelessly naive, or a potentially useful technique? Also, any other advice?

[1] https://www.3d6downtheline.com/ [2] https://github.com/MahmoudAshraf97/whisper-diarization/
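The overlap-bootstrapping idea above can be sketched with plain NumPy: overlap-add two utterances and emit time-stamped speaker labels alongside the mixture. This is a toy sketch under my own assumptions (sine tones stand in for TTS-cloned clips; the sample rate, function, and label format are hypothetical, not from any diarization toolkit):

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumption)

def mix_with_overlap(clip_a, clip_b, overlap_s, sr=SR):
    """Overlap-add two mono clips by overlap_s seconds; return the
    mixture plus (start_s, end_s, speaker) label tuples."""
    ov = int(overlap_s * sr)
    total = len(clip_a) + len(clip_b) - ov
    mix = np.zeros(total, dtype=np.float32)
    start_b = len(clip_a) - ov          # speaker B starts before A ends
    mix[:len(clip_a)] += clip_a
    mix[start_b:start_b + len(clip_b)] += clip_b
    labels = [
        (0.0, len(clip_a) / sr, "spk_a"),
        (start_b / sr, total / sr, "spk_b"),
    ]
    return mix, labels

# Stand-ins for TTS-cloned utterances (1 s sine tones, just for shape).
t = np.arange(SR, dtype=np.float32) / SR
a = 0.3 * np.sin(2 * np.pi * 220 * t)
b = 0.3 * np.sin(2 * np.pi * 330 * t)
mix, labels = mix_with_overlap(a, b, overlap_s=0.25)
```

Because the labels are generated rather than hand-annotated, the overlapping region is exactly known, which is precisely the part that is painful to label manually.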

tarasglek · 2 years ago
Unclear from the docs: does your solution support inferring the number of speakers from the audio? I found it a bit frustrating that this wasn't automatic in the diarization algorithms I tried last year.
SubiculumCode · 2 years ago
The solution from that GitHub repo automatically determines the number of speaker labels, but it will often create extra speaker classes for a few excerpts in the stream. I believe you can pre-specify the number of speakers for better performance.
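Inferring the speaker count typically comes down to clustering per-segment speaker embeddings with a distance threshold instead of a preset cluster count. A toy sketch of that idea (assumptions: scikit-learn's agglomerative clustering as the method, random 2-D points standing in for real speaker embeddings, and a hand-picked threshold):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def infer_num_speakers(embeddings, distance_threshold=2.0):
    """Cluster segment embeddings without fixing n_clusters; the
    threshold, not a preset count, determines how many speakers."""
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold)
    labels = clustering.fit_predict(embeddings)
    return len(set(labels)), labels

# Synthetic 2-D "embeddings": three tight groups, far apart.
rng = np.random.default_rng(0)
emb = np.vstack([
    rng.normal(loc=c, scale=0.05, size=(10, 2))
    for c in ([0, 0], [5, 5], [10, 0])
])
n, labels = infer_num_speakers(emb)
```

The spurious extra speaker classes mentioned above correspond to choosing the threshold too low; pre-specifying the count sidesteps the threshold entirely.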
ackprakhack · 2 years ago
We've just open-sourced MARS5 and are bullish about its ability to capture very hard prosody -- hopefully you can validate the results and grow alongside its community.

We tend to agree: the time for just one company to be seriously doing speech is over. It needs to be more diverse, and it needs to be open source: https://github.com/Camb-ai/MARS5-TTS

BenRacicot · 2 years ago
If we could run this locally (Win and Mac) it could reset the standard for accessibility.
vijucat · 2 years ago
I absolutely love how good the VCTK/VITS voices are (109 of them!). I found it easy to install Coqui on WSL, and it is able to use CUDA + the GPU quite effectively. p236 (male) and p237 (female) are my choices, but holy cow, 109 quality voices still blows my mind. Crazy how you had to pay for good TTS just a year ago, but now it's commoditized. Hope you find this useful:

    CUDA_VISIBLE_DEVICES="0" python TTS/server/server.py --model_name tts_models/en/vctk/vits --use_cuda True


    import threading
    import winsound  # Windows-only module

    # Learning: you have to serialize calls to winsound.PlaySound(),
    # which fails with "Failed to play sound" if you try to play two
    # clips at once.
    semaphore = threading.Semaphore(1)

    def play_sound(response):
        # 'with' releases the semaphore even if PlaySound raises.
        with semaphore:
            winsound.PlaySound(response.content,
                               winsound.SND_MEMORY | winsound.SND_NOSTOP)