I followed the link from the blog post that said "check out the demo on our product website". Then there's a big button that says "TRY IT FREE". Good, I say. That leads me through a signup process that involves credit cards and whatnot, and then dumps me out on what I guess is the equivalent of the AWS console, not some nice audio test page.
So then I root around in the console, finally find the speech-to-text stuff, and screw around with various interfaces. None of them seems to be the right thing. Eventually I decide I must have missed something, go back to the product website, and scroll down further to find the "convert your speech to text right now". Great, say I.
The blog post explicitly talks about video. I want to see if it can transcribe a talk I did, so I tried uploading a file; nothing appears to happen on Firefox. I try a couple more times. I sigh heavily and switch to Chrome.
It does appear to work on Chrome, but it's entirely infuriating. I tried uploading a video file, which was over 50MB, so it refused. I then figured out how to extract the audio alone and uploaded that, at which point it complained it was over a minute. Then I find another incantation to chop my audio to a minute (which they just should have done for me, and which anyway should be explained in the interface).
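For anyone stuck in the same dance, the extract-and-trim step is a single ffmpeg invocation. A sketch, assuming ffmpeg is on your PATH; the command is built in Python so each flag is visible:

```python
import subprocess

def ffmpeg_trim_cmd(src, dst, seconds=60):
    # -vn drops the video stream, -t keeps only the first `seconds` of audio;
    # the output codec is inferred from dst's extension (e.g. .flac).
    # -y overwrites dst if it already exists.
    return ["ffmpeg", "-y", "-i", src, "-vn", "-t", str(seconds), dst]

# To actually run it (requires ffmpeg installed):
# subprocess.run(ffmpeg_trim_cmd("talk.mp4", "talk.flac"), check=True)
```

Which is exactly the kind of thing the demo page could have done server-side instead of silently rejecting the upload.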
Finally, I upload 60 seconds of audio. And nothing fucking happens. After all that, the thing just doesn't work. No error messages, no anything.
This is my first impression of the Google Cloud Platform, and all I hear is the squeaking of clown shoes. I'm sure the rest of it can't be this bad, but if they can't make a simple demo work, I'm unlikely to find out.
Update: I decided to try through the Google console, and also try Amazon's speech recognition through the AWS console.
AWS just let me transcribe my MP3 in a pretty straightforward way once I'd uploaded it to an S3 bucket. The transcript is done in 2-3x real time, and the quality seems decent. It comes as a complex JSON file with confidence numbers and timestamps for every word, with alternate words when it knows it isn't sure. It's pretty neat.
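To give a feel for that JSON, here's a sketch of flattening it into per-word tuples. The payload shape below is approximated from the description above (top-level `results` with per-word `items`, each carrying string-typed timestamps and ranked `alternatives`), not copied from AWS documentation:

```python
import json

# Hypothetical sample mimicking a Transcribe-style result.
SAMPLE = json.loads("""
{
  "results": {
    "transcripts": [{"transcript": "hello world"}],
    "items": [
      {"type": "pronunciation", "start_time": "0.00", "end_time": "0.41",
       "alternatives": [{"confidence": "0.99", "content": "hello"}]},
      {"type": "pronunciation", "start_time": "0.41", "end_time": "0.90",
       "alternatives": [{"confidence": "0.72", "content": "world"},
                        {"confidence": "0.21", "content": "whirled"}]}
    ]
  }
}
""")

def words_with_timing(payload):
    # Flatten the per-word items into (word, confidence, start, end) tuples,
    # taking the top-ranked alternative for each spoken word.
    out = []
    for item in payload["results"]["items"]:
        if item.get("type") != "pronunciation":
            continue  # punctuation items carry no timestamps
        best = item["alternatives"][0]
        out.append((best["content"], float(best["confidence"]),
                    float(item["start_time"]), float(item["end_time"])))
    return out
```

The lower-confidence second word above is where the alternate suggestions ("whirled") come into play.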
Google made me use a sort of query builder interface to construct an API request. The query builder did not actually match the features announced in the blog post, so I just tried going with what was there. When I eventually got a valid-looking request, it blew up because it turns out it can't parse MP3s. So then I reencoded to FLAC and uploaded that. I tried a variety of queries, but none of them worked. The one that got closest complained about a bad value for a field the query builder apparently would not let me add.
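For reference, the request the query builder is trying to construct looks roughly like this. This is a sketch of a v1 `speech:recognize` request body as I understand it; field names are my best recollection of that API, not taken from this post:

```python
def build_recognize_request(gcs_uri, language="en-US", sample_rate=16000):
    # Build a request body for Google's speech:recognize REST endpoint.
    # Note the encoding: the API accepts FLAC or LINEAR16, which is why
    # the MP3 upload above blew up and a FLAC re-encode was needed.
    return {
        "config": {
            "encoding": "FLAC",
            "sampleRateHertz": sample_rate,
            "languageCode": language,
            "enableWordTimeOffsets": True,  # per-word timestamps
        },
        "audio": {"uri": gcs_uri},  # a gs:// path to the uploaded file
    }
```

Even with a dict like this in hand, the complaint above stands: the builder UI wouldn't let you set all the fields the endpoint apparently validates.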
I gave up. Squeak, squeak, squeak!
And I should add that the people I know at Google are all perfectly smart, so I don't want anybody to think I'm saying that the individual engineers who made this are dumb or bad. This seems like a giant organizational failure, where what gets built is deeply disconnected from user need and the lived user experience.
Normally when I get insight into a place where this happens, the priority is not actually delivering value, but making managers look good according to easily measured but harmful metrics, like, "Are we at competitive parity at a feature checklist level?" or "Did we launch by some made-up deadline so that a manager could claim success?"
If anybody at Google wants to send me their horror stories, please do email or DM me on Twitter. I'd love to know what the hell happened here, and I promise to keep things as confidential as you like.
This is what you get when you don't have a top-down management style. Teams are given too much freedom, leading to inconsistency. No one has the whole vision of how a product should work. Even if they did have that, they likely don't have the authority to make it happen.
I've tried their transcription API. The quality is awful, to the point where the words it outputs change completely if you chop a few seconds off the beginning or end of the audio.
Thanks for "the squeaking of clown shoes". I'll have to remember that.
How long before we can get a Kodi plugin that transcribes the audio and translates it to the subtitle language you have chosen? I would really be interested in this for Japanese, Korean, and Chinese shows, where I sometimes have to wait months or years before fansubs are available. Though thanks to Netflix, English subs are becoming available a lot quicker than before for many of these shows.
.. there's nothing stopping you from writing it; it's just a few calls to the Google Cloud API, no more than an afternoon's work in the scripting language of your choice. The real issue is that I suspect the translation might be a bit off at times.
I wonder how long it will take until countries require telecom companies to transcribe and store all phone calls for a "limited time period" of, let's say, 6 months, for "our security".
And then run algorithms on these texts to classify the conversations into "potentially crime related discussions" classes.
If they force the carriers to do it, then they likely have to deal with subpoenas or other documentation. If the government (of any country) does it themselves, that's way easier - https://infogalactic.com/info/ECHELON
Met Dan at an AI conference, and having worked with the API, I think it's really cool that your average dev has access to this level of transcription; it's a non-trivial problem (I've been working on speech recognition since the early '00s).
I agree with some of the comments regarding Google being a big co & having big co issues. But at the core of it, the team, the offering & attention to what matters is solid.
It's certainly going to open up a whole new realm of possibilities.
the number of voice based startups that have built business logic on top of this fundamental api is staggering. some names: voicera (automated meeting minutes), voiceops (call center call analysis), chorus.ai (phone call analytics)
the focus on improving call center performance is where the money is. plenty more vendors will enter this market.
https://www.theguardian.com/commentisfree/2013/may/04/teleph...
Interesting name change. It’s certainly more precise, but was “Speech API” really confusing people?