Dataset Viewer
Auto-converted to Parquet Duplicate
text
stringlengths
0
25.2k
Hello and welcome to a audio dataset consisting of one single episode of a non-existent podcast. Or, it eh, I may append this to a podcast that I set up recently regarding my with my thoughts on speech tech and AI in particular. More AI and generative AI I would, I would say. But in any event, the purpose of this voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation as they might say, for different speech to text models. And I'm doing this because I I thought I'd made a great breakthrough in my journey with speech tech. And that was succeeding in the elusive task of fine-tuning Whisper. Whisper is, and I'm going to just talk, I'm trying to mix up, I'm going to try a few different styles of speaking. I might whisper something at some points as well. And I'll go back to speaking loud in different parts. I'm going to sound really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to put a speech to text model through its paces, which is trying to make sense of "is this guy just rambling on incoherently in one long sentence?" Or "are these just actually a series of step standalone stepalone standalone sentences?" And how is it going to handle stepalone?! That's not a word! What happens when you use speech to text and you use a fake word and then you're like, wait, that's not actually, that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why did why was I trying to fine tune whisper? And what is Whisper? As I said, I'm going to try to record this at a couple of different levels of technicality - for folks who are in the normal world and not totally stuck down the rabbit hole of AI. Which I have to say is a really wonderful rabbit hole to be to be down. It's a really interesting area. And speech and voice tech is the aspect of it that I find actually most - I'm not sure I would say the most interesting because there's just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I am persevering hard with the task of trying to get a good solution working for Linux. Which if anyone actually does listen to this not just for the training data and for the actual content, this has sparked. I had, besides the fine tune not working, well that was the failure. I used Claude Code. Because one thinks these days that there is nothing short of solving, you know, the reason of life or something that Claude and agentic AI can't do. Which is not really the case. It does seem that way sometimes. But it fails a lot as well. And this is one of those instances where last week I put together an hour of voice training data: basically speaking just random things for three minutes. And it was actually kind of tedious because the texts were really weird. Some of them were it was like, it was AI generated. I tried before to read Sherlock Holmes for an hour and I just couldn't, I was so bored after 10 minutes that I was like, "okay, no, I'm just gonna have to find something else to read." So I used I created with AI Studio, vibe coded, a synthetic text generator, which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note. Like I'm recording an email. Give me a short story to read. Give me prose. So I came up with all these different things and I added a little timer to it so I could see how close I was to one hour. And I spent like an hour one afternoon or probably two hours by the time you do retakes and whatever because you want to. It gave me a source of truth which I'm not sure if that's the scientific way to approach this topic of gathering training data but I thought made sense. I have a lot of audio data from recording voice notes which I've also kind of used been experimenting with using for a different purpose. It's slightly different - annotating task types. It's more text classification experiment. Or well it's more than that actually I'm working on a voice app. So it's a prototype I guess is really more accurate. But you can do that and you can work backwards. You listen back to a voice note and you painfully go through one of those - transcribing where you start and stop and scrub around it and you fix the errors . But it's really really boring to do that. So I thought it would be less tedious in the long term if I just recorded the source of truth. So it gave me these three minute snippets. I recorded them and saved an MP3 and a TXT in the same folder and I created an hour of that data. So I was very hopeful - quitely, you know, a little bit hopeful - that I would be able that I could actually fine tune Whisper. I want to fine tune Whisper because when I got into voice tech last November my wife was in the US and I was alone at home. And when crazy people like me do really wild things like use voice to tech technology that was basically when I started doing it. I didn't feel like a crazy person speaking to myself. And my expectations weren't that high. I used speech tech now and again tried it out. I was like "it'd be really cool if you could just like speak into your computer." And whatever I tried out that had Linux support was just - it was not good, basically. And this blew me away from the first go. I mean it wasn't 100% accurate out of the box. And it took work. But it was good enough that there was a solid foundation. And it kind of passed that pivot point that it's actually worth doing this. You know, there's a point where it's. So like the transcript is you don't have to get 100% accuracy for it to be worth your time for speech to text to be a worthwhile addition to your productivity. But you do need to get above let's say I don't know 85%. If it's 60% or 50% you inevitably say "screw it I'll just type it."Because you end up missing errors in the transcript and it becomes actually worse. You end up in a worse position than you started with it. That's been my experience. So I was like "oh, this is actually really really good. Now how did that happen?" The answer is ASR, Whisper being open-sourced. and the transformer architecture if you want to go back to the to the underpinnings. Which really blows my mind. And it's on my list to read through that paper 'All You Need Is Attention' as attentively as can be done with my limited brain. Because it's super high-level stuff - super advanced stuff I mean. But that I think of all the things that are fascinating about the sudden rise and AI and the dramatic capabilities I find it fascinating that few people are like "hang on, you've got this thing that can speak to you like a chatbot - an LLM. Then you've got image generation. Okay, so firstly those two things on the surface have nothing in common. So like, "how are they ... how did THAT just happen all at the same time?" And then when you extend that further you're like Suno right. You can sing a song and AI will like come up with an instrumental. And then you've got Whisper. And then you're like "wait a second how did all this stuff like if it's all AI what's like, there has to be some commonality. Otherwise these are four these are totally different technologies on the surface of it and the transformer architecture is as far as I know the answer. And I can't even say I can't even pretend that I really understand what the transformer architecture means in depth. But I have scanned it. And as I said I want to print it and really kind of think over it's at some point. And I'll probably feel bad about myself I think! Because weren't those guys in their in their 20s like? That's crazy! I think I asked ChatGPT once "who were the? Who wrote that paper and how old were they when it was published in Arxiv?" And I was expecting like, I don't know. What do you what do you imagine? I personally imagine kind of like you know you have these breakthroughs during COVID and things like that where like these kind of really obscure scientists who are like in their 50s and they've just kind of been laboring in labs and wearily writing and publishing in kind of obscure academic publications and they finally like hit a big or win a Nobel Prize. And then they're household household names. So I that was kind of what I had in mind. That was the mental image I'd formed of the birth of Arxiv. Like, I wasn't expecting 20-somethings in San Francisco! Though I thought that was both very very funny, very cool, and actually kind of inspiring. It's nice to think that people who you know just you might put them in the kind of milieu or bubble or world that you are in or credibly in through you know the series of connections that are coming up with such literally world changing innovations. So that was I thought anyway that that was cool. Okay voice training data. How are we doing? We're about 10 minutes. And I'm still talking about voice technology! So Whisper was brilliant. And I was so excited that I was my first instinct was to like guess it's like "oh my gosh I have to get like a really good microphone for this." So I didn't go on a spending spree because I said I'm gonna have to just wait a month and see if I still use this." And it just kind of became it's become really part of my daily routine. Like, if I'm writing an email I'll record a voice note and then I'll develop it and it's nice to see that everyone is like developing the same things in parallel. Like, that's maybe kind of a weird thing to say. But when I look, I kind of came when I started working on this these prototypes on GitHub, which is where I just kind of share very freely and loosely ideas and you know first iterations on concepts. And for want of a better word I called it like "LLM post processing." Or cleanup. Or basically a system prompt that after you get back the raw text from Whisper, you run it through model and say "okay this is crappy text like add sentence structure and you know fix it up. " And now when I'm exploring the different tools that are out there that people have built, I see quite a number of projects have basically you know done the same thing. Lest that be misconstrued, I'm not saying for a millisecond that I inspired them. I'm sure this has been a thing that's been integrated into tools for a while. But it's, it's the kind of thing that when you start using these tools every day the need for it is almost instantly apparent. Because text that doesn't have any punctuation or paragraph spacing takes a long time to you know, it takes so long to get it into a presentable email that,again, it moves speech tech into that, before that inflection point where you're like "nah it's just not worth." It it's like it'll just be quicker to type this. So it's it's a big - it's a little touch that actually is a big deal So I was on Whisper and I've been using Whisper and I kind of early on find a couple of tools. I couldn't find what I was looking for on Linux which is basically just something that'll run in the background. You'll give it an API key and it'll just like transcribe. With like a little key to start and stop the dictation. And the issues wer I discovered that like most people involved in creating these projects were very much focused on local models. And running Whisper locally because you can. And I tried that a bunch of times and just never got results that were as good as the cloud. And when I began looking at the cost of the speech to text APIs and what I was spending just thought there it's actually in my opinion just one of the better deals in API spending and in cloud. Like, it's just not that expensive for very, very good models that are much more. You know, you're going to be able to run the full model, the latest model versus whatever you can run on your average GPU. Unless you want to buy a crazy GPU. It doesn't really make sense to me. Now, I privacy is another concern that I know is kind of like a very much a separate thing. That people just don't want their voice data and their voice leaving their local environment. Maybe for regulatory reasons as well. But I'm not in that. I'm don't really really care about people listening to my grocery list consisting of reminding myself that I need to buy more beer, Cheetos and hummus. Which is kind of the three three staples of my diet during periods of poor nutrition. But the kind of stuff that I transcribe most it's just not it's not a it's not a privacy thing. I'm not that sort of sensitive about. And I don't do anything so you know sensitive or secure that requires airgapping. So I looked at the pricing and especially the kind of older models mini. Some of them are very very affordable. And I did a back of the, I did a calculation once with ChatGPT and I was like "okay, this is the, this is the API price for I can't remember whatever the model was. Let's say I just go at it like nonstop which rarely happens. Probably I would say an average I might dictate 30 to 60 minutes per day if I was probably summing up the emails, documents, outlines. Which is a lot. But it's it's still a fairly modest amount. And I was like well some days I do go on like one or two days where I've been usually when I'm like kind of out of the house and just have something like I've nothing else to do. Like if I'm at a hospital. We have a newborn. And you're waiting for like hours and hours for an appointment. And I would probably have listened to podcasts before becoming a speech fanatic. And I'm like "oh wait let me just get down let me just get these ideas out of my head." And that's when I'll go on my speech binges. But those are like once every few months - like not frequently. But I said okay let's just say if I'm gonna price out cloud STT. If I was like dedicated every second of every waking hour to transcribing for some odd reason. I mean, I'd have to like eat and use the toilet! Like, you know there's only so many hours I'm awake for. So like let's just say a maximum of like 40 hour 45 minutes in the hours and I said all right let's just say 50. Who knows? You're dictating on the toilet! We do it! So you could just do 60. But whatever I did - and every day. Like you're going flat out, seven days a week dictating nonstop as like "what's my monthly API bill gonna be at this price?" And it came out to like 70 or 80 bucks. And I was like, well that would be an extraordinary amount of dictation! And I would hope that there was some compelling reason more worth more than 70 dollars that I embarked upon that. So given that that's kind of the max point for me I said that's actually very very affordable. Now you're gonna if you want to spec out the costs and you want to do the post processing that I really do feel is valuable that's gonna cost more as well. Unless you're using Gemini which needless to say as a random person sitting in Jerusalem I have no affiliation nor with Google nor Anthropic nor Gemini nor any major tech vendor for that matter. Um I like Gemini not so much as a everyday model. Um it's kind of underwhelmed in that respect I would say. But for multimodal I think it's got a lot to offer. And I think that the transcribing functionality whereby it can um process audio with the system prompt and both give you a transcription that's cleaned up - that reduces two steps to one. And that for me is a very very big deal. And uh I feel like even Google hasn't really sort of thought through how useful the that modality is and what kind of use cases uh you can achieve with it. Because I found in the course of this year just an endless list of really kind of system prompt system prompt stuff that I can say "okay I've used it to capture context data for AI which is literally I might speak for if I wanted to have a good bank of context data about who knows my childhood uh more realistically maybe my career goals something that would just be like really boring to type out so I'll just like sit in my car and record it for 10 minutes. And that 10 minutes you get a lot of information in um emails which is short text uh just there is a whole bunch. And all these workflows kind of require a little bit of treatment afterwards and different treatment. My context pipeline is kind of like just extract the bare essential. So you end up with me talking very loosely about sort of what I've done in my career, where I've worked, where I might like to work. And it goes - it condenses that down to very robotic language that is easy to chunk, parse, and maybe put into a vector database. "Daniel has worked in technology! Daniel is a has been working in marketing." Stuff like that. That's not how you would speak um but I figure it's probably easier to parse for, after all, robots. So we've almost got to 20 minutes. And this is actually a success because I wasted 20 minutes of my uh of the evening speaking into microphone and the levels were shot and it uh it was clipping. And I said I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. Uh what am I hoping to achieve in this? Okay my fine tune was a dud as mentioned. Deepgram STT - I'm really really hopeful that this prototype will work. And it's a build in public open source. So anyone is welcome to use it if I make anything good But what was really exciting for me last night when after hours of um trying my own prototypes, seeing someone just made something that works like that. You know, you're not going to have to build a custom Conda environment and image. I have AMD GPU which makes things much more complicated. I didn't find it And I was about to give up and I said "all right. Let me just give Deepgram's Linux thing a shot and if this doesn't work um I'm just going to go back to trying to vibe code something myself." And when I ran the script - I was using Claude Code to do the installation process - it ran the script and "oh my gosh, it works!" Just like that! Uh the tricky thing for all those ones who want to know all the nitty gritty details um was that I don't think it was actually struggling with transcription but pasting. Wayland makes life very hard. And I think there was something not running at the right time. Anyway, Deepgram - I looked at how they actually handled that because it worked out of the box when other stuff didn't. And it was quite a clever little mechanism and but more so than that the accuracy was brilliant. Now, what am I doing here? This is going to be a 20 minute audio uh sample and I'm I think I've done one or two of these before but I did it with short, snappy voice notes. This is kind of long form. This actually might be a better approximation for what's useful to me than voice memos like "I need to buy three liters of milk tomorrow and pita bread." Which is probably how like half my voice note voice notes sound. Like if anyone were to I don't know like find my phone they'd be like "this is the most boring person in the world!" Although actually there are some like kind of uh journaling thoughts as well. But it's a lot of content like that. And the probably for the evaluation the most useful thing is slightly obscure tech: Github, Nuclino, Hugging Face. Not so obscure that it's not going to have a chance of knowing it. But hopefully sufficiently well known that the models should get it. Uh I tried to do a little bit of speaking really fast and speaking very slowly. I would say in general I've spoken delivered this at a faster pace than I usually would owing to strong coffee flowing through my bloodstream. And the thing that I'm not going to get in this benchmark is background noise. Which in my first take that I had to get rid of my wife came in with my son for a good night kiss. And that actually would have been super helpful to get in because it was non-diarised. Or if we had diarisation a female I could say I want the male voice and that wasn't intended for transcription um. And we're not going to get background noise like people honking their horns. Which is something I've done in my main dataset where I am trying to go back to some of my voice notes, annotate them, and run a benchmark. But this is going to be just a pure quick test. And as someone I'm working on a voice note idea that's my sort of end motivation besides thinking it's an absolute outstanding technology that's coming to viability and really - I know this sounds cheesy - can actually have a very transformative effect. It's, you know, voice technology has been life changing for folks living with disabilities. And I think there's something really nice about the fact that it can also benefit you know folks who are able-bodied. And like we can all in different ways um make this tech as useful as possible regardless of the exact way that we're using it um. And I think there's something very powerful in that. And it can be very cool um I see huge potential. What excites me about voice tech - a lot of things actually. Firstly the fact that it's cheap and accurate as I mentioned at the very start of this um. And it's getting better and better with stuff like accent handling um. I'm not sure my my fine tune will actually ever come to fruition in the sense that I'll use it day to day as I imagine and get like superb flawless words error rates. Because I'm just kind of skeptical about local speech to tech as I mentioned. And I think the pace of innovation and improvement in the models. The main reasons for fine tuning from what I've seen have been people who are something that really blows blows my mind about ASR is the idea that it's inherently a-llingual. Or multilingual. Phonetic-based. So as folks who use speak very obscure languages that there may be there there might be a paucity of training data or almost none at all. And therefore the accuracy is significantly reduced. Or folks in very critical environments. I know there are this is used extensively in medical transcription and dispatcher work as um you know the call centers who send out ambulances etc where accuracy is absolutely paramount. And in the case of doctors, radiologists they might be using very specialized vocab all the time. So those are kind of the main two things. And I'm not sure that really just for trying to make it better on a few random tech words with my slightly. I mean, I have an accent! But like, not you know an accent that a few other million people have. Ish. I'm not sure that my little fine tune is going to actually like the bump in word error reduction if I ever actually figure out how to do it and get it up to the cloud. By the time we've done that I suspect that the next generation of ASR will just be so good that it will kind of be "nah, well, that would be cool if it worked out. But I'll just use this instead." So that's going to be it for today's episodes of voice training data single long shot evaluation. Who am I going to compare? Whisper is always good as a benchmark. But I'm more interested in seeing Whisper head-to-head with two things really. One is Whisper variants. So you've got these projects like Faster Whisper, Distilled Whisper. It's a bit confusing. There's a whole bunch of them. And the emerging ASRs which are also a thing. My intention for this is I'm not sure I'm going to have the time in any point of the foreseeable future to go back through this whole episode and create a proper source truth or I fix everything. I might do it if I can get one transcription that's sufficiently close to perfection. But what I would actually love to do on Hugging Face I think would be a great probably how I might visualize this is having the audio waveform play. And then have the transcript for each model below it. And maybe even a like you know to scale. And maybe even a local one as well like local Whisper versus Open AI API etc. And I can then actually listen back to segments. Or anyone who wants to can listen back to segments of this recording and see where a particular model struggled while others didn't, as well as the sort of headline finding of which had the best WER. But that would require the source of truth. Okay, that's it. Hope this was, I don't know, maybe useful for other folks interested in STT. You want to see - that I always feel think I've just said as something I didn't intend to. STT I said for those listening carefully! Including hopefully the models themselves! This has been myself Daniel Rosehill. For more um jumbled repositories about my uh roving interests in AI. But particularly agentic AI, MCP, and voice tech, you can find me on Github, Hugging Face. Where else? DanielRosehilll.com which is my personal website. As well as this podcast whose name I sadly cannot remember! Until next time, thanks for listening!
Hello and welcome to a audio data set consisting of one single episode of a non-existent podcast or it uh i may append this to a podcast that i set up recently um regarding my uh with my thoughts on speech tech and ai in particular more ai and generative ai i would uh i would say but in any event the purpose of this um voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation, as they might say, for different speech to text models. And I'm doing this because I thought I'd made a great breakthrough in my journey with speech tech, and that was succeeding in the elusive task of fine tuning Whisper. Whisper is, and I'm going to just talk i'm trying to mix up uh i'm going to try a few different styles of speaking i might whisper something at some points as well and i'll go back to speaking loud in uh in different parts i'm going to sound really like a crazy person because i'm also going to try to speak at different pitches and cadences in order to really try to put a speech attacks model through its paces which is trying to make sense of is this guy just rambling on incoherently in one long sentence or are these just actually a series of step, standalone, step alone, standalone sentences. And how is it going to handle step alone? That's not a word. What happens when you use speech to text and you use a fake word? And then you're like, wait, that's not actually, that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why did, why was it trying to fine tune Whisper? what is whisper as i said i'm gonna try to uh record this at a couple of different levels of technicality for folks who are uh you know in the normal uh world and not totally stuck down the rabbit hole of ai which i have to say is a really wonderful uh rabbit hole to be to be down um it's a really interesting area and speech and voice tech is is the aspect of it that i find actually most i'm not sure i would say the most interesting because there's Just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I'm persevering hard with the task of training, I guess, a good solution working for Linux, which if anyone actually does listen to this, not just for the training data and for the actual content, this is this is sparked. I had besides the fine-tune not working well that was the failure um i used plod code because one thinks these days that there is nothing short of solving you know the uh the reason of life or something uh that plod and agentic ai can't do uh which is not really the case uh it does seem that way sometimes but it fails a lot as well and this is one of those instances where last week I put together an hour of voice training data, basically speaking, just random things for three minutes. And it was actually kind of tedious because the texts were really weird. Some of them were, it was like, it was AI generated. I tried before to read Sherlock Holmes for an hour and I just couldn't, I was so bored after 10 minutes that I was like, okay, I know I'm just going to have to find something else to read. So I used... a created with AI studio vibe coded a synthetic text generator which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content so I was like okay give me a voice note like I'm recording an email give me a short story to read give me prose to read so it came up with all these different things and they added a little timer to it so I could see. how close i was to one hour um and uh i spent like an hour one afternoon or probably two hours by the time you um you do retakes and whatever because you want to it gave me a source of truth which i'm not sure if that's the scientific way to approach this topic of gathering uh training data but i thought made sense um i have a lot of audio data from recording voice notes which I've also kind of used Bean. experimenting with using for a different purpose slightly different annotating task types it's more text classification experiment or Well, it's more than that, actually. I'm working on a voice app. So it's a prototype, I guess, is really more accurate. But you can do that and you can work backwards. You're like, you listen back to a voice note and you painfully go through one of those transcribing, you know, where you start and stop and scrub around it and you fix the errors. But it's really, really boring to do that. So I thought it would be less tedious in the long term if I just recorded the source of truth. So it gave me these three minute snippets. I recorded them and saved an MP3 and a TXT in the same folder. And I created an error of that data. So I was very hopeful, quietly, you know, a little bit hopeful that I would be able that I could actually fine tune Whisper. I want to fine tune Whisper because when I got into voice tech last November, my wife was in the US and I was alone at home and, you know, went crazy. people like me do really wild things like use voice to tech technology that was basically when I started doing it I didn't feel like a crazy person speaking to myself and my expectations weren't that high I used speech tech now and again tried it out I was like it'd be really cool if you could just like speak into your computer and whatever I tried out that had support was just it was not good basically And this blew me away from the first go. I mean, it wasn't 100% accurate out of the box and it took work, but it was good enough that there was a solid foundation and it kind of passed that pivot point that it's actually worth doing this. You know, there's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time, for a speech to text to be a worthwhile addition to your productivity. But you do need to get above, let's say, I don't know, 85%. percent. If it's 60% or 50%, you inevitably say, screw it, I'll just type it because you end up missing errors in the transcript and it becomes actually worse. You end up in a worse position than you started with it. That's been my experience. So I was like, oh, this is actually really, really good now. How did that happen? And the answer is ASR, Whisper being open sourced and the transformer architecture. If you want to go back to the to the underpinnings, which really blows my mind. And it's on my list to read through that paper. All you need is attention as attentively as can be done with my limited brain because it's super, super high level stuff. Super advanced stuff, I mean. But that, I think of all the things that are fascinating about the sudden rise in AI and the dramatic capabilities. I find it fascinating that few people are like, hang on, you've got this thing that can speak to you like a chatbot, an LLM. And then you've got image generation. OK, so firstly, those two things on the surface have nothing in common. So like, how are they? How did that just happen all at the same time? And then when you extend that further, you're like Suno, right? You can sing a song and AI will like come up with an instrumental. And then you've got Whisper. And you're like, wait a second. How did all this stuff, like if it's all AI, what's, like there has to be some commonality. Otherwise, these are totally different technologies on the surface of it. And the transformer architecture is, as far as I know, the answer. And I can't even say, can't even pretend that I really understand what the transformer architecture means in depth. But I have scanned this. And as I said, I want to... printed and really kind of think over it at some point and I'll probably feel bad about myself I think because weren't those guys in their in their 20s like that's crazy I think I asked chat gpt once who were the who wrote that paper and how old were they when it was published in arcs if and I was expecting like I don't know what do you what do you imagine I personally imagine kind of like you know you have these breakthroughs during covid and things like that where like these kind of really obscure scientists who are like in their 50s and they've just kind of been laboring in labs and uh wearily and writing and publishing in kind of obscure academic publications and they finally like hit a big or win a noble prize and then their household household names uh so that was kind of what i had in mind that was the mental image i'd formed of the birth of arcs of like i wasn't expecting 20 somethings in san francisco though i i thought that was both very very funny very cool and actually kind of inspiring It's nice to think that people who, you know, just you might put them in the kind of. milieu or bubble or world that you are in or credibly in through you know the series of connections that are coming up with such literally world-changing um innovations uh so that was i thought anyway that's that that was cool okay voice training data how are we doing we're about 10 minutes and i'm still talking about voice technology um so whisper was brilliant and i was so excited that i was my first instinct was to like guess It's like, oh my gosh, I have to get like a really good microphone for this. So I didn't go on a spending spree because I said, I'm going to have to just wait a month and see if I still use this. And it just kind of became, it's become really part of my daily routine. Like if I'm writing an email, I'll record a voice note. And then I've developed. And it's nice to see that everyone is like developing the same things in parallel. Like that's kind of a weird thing to say. But when I look, I... kind of came when i started working on this uh these prototypes on github which is where i just kind of share very freely and loosely uh ideas and you know first iterations on on concepts um and for want of a better word i called it like uh llm post-processing or cleanup or basically a system prompt that after you get back the raw text from whisper you run it through a model and say, okay, this is crappy. text like add sentence structure and you know fix it up and now when I'm exploring the different tools that are out there that people have built I see quite a number of projects have basically you know done the same thing lest that be misconstrued I'm not saying for a millisecond that I inspired them I'm sure this has been a thing that's been integrated into tools for a while but it's It's the kind of thing that when you start using these tools every day, the need for it is almost instantly apparent because text that doesn't have any punctuation or paragraph spacing takes a long time to, you know, it takes so long to get it into a presentable email that again, it's, it's, it, it moves speech tech into that before that inflection point where you're like, nah, it's just not worth it. It's like, it'll just be quicker to type this. So it's a big, it's a little touch that actually. is a big deal. So I was on Whisper and I've been using Whisper and I kind of early on found a couple of tools. I couldn't find what I was looking for on Linux, which is basically just something that'll run in the background. You'll give it an API key and it will just like transcribe with like a little key to start and stop the dictation. And the issues were I discovered that like most people involved in creating these projects were very much focused on local models running whisper locally because you can and i tried that a bunch of times and just never got results that were as good as the cloud and when i began looking at the cost of the speech to text apis and what i was spending i just thought there is it's actually in my opinion just one of the better deals in api spending and in cloud like it's just not that expensive for very very good models That are much more, you know, you're going to be able to run the full model, the latest model versus whatever you can run on your average GPU, unless you want to buy a crazy GPU. It doesn't really make sense to me. Privacy is another concern that I know is kind of like a very much a separate thing that people just don't want their voice data and their voice leaving their local environment, maybe for regulatory reasons as well. But I'm not in that. I'm neither really care. about people listening to my grocery list consisting of reminding myself that I need to buy more beer, Cheetos and hummus, which is kind of the three, three staples of my diet during periods of poor nutrition. But the kind of stuff that I transcribe, it's just not, it's not a, it's not a privacy thing I'm that sort of sensitive about. And I don't do anything so, you know, sensitive or secure that requires air gapping. So. I looked at the pricing and especially the kind of older models, mini, some of them are very, very affordable. And I did a calculation once with ChatGPT and I was like, OK, this is the API price for I can't remember whatever the model was. Let's say I just go at it like nonstop, which it rarely happens. Probably I would say on average I might dictate 30 to 60 minutes per day if I was probably summing up the emails. uh, documents, outlines, um, which is a lot, but it's, it's still a fairly modest amount. And I was like, well, some days I do go on like one or two days where I've been. Usually when I'm like kind of out of the house and just have something like I have nothing else to do. Like if I'm at a hospital, we have a newborn and you're waiting for like eight hours and hours for an appointment. And I would probably have listened to podcasts before becoming a speech fanatic. And I'm like, oh, wait, let me just get down. Let me just get these ideas out of my head. And that's when I'll go on my speech binges. But those are like once every few months, like not frequently. But I said, okay, let's just say if I'm going to price out cloud STT, if I was like dedicated every second of every waking hour to transcribing for some odd reason, um, I mean, it'd have to like eat and use the toilet. Like, you know, there's only so many hours I'm awake for. So like, let's just say a maximum of like 40 hours, 45 minutes in the hour. Then I said, all right, let's just say 50. Who knows? You're dictating on the toilet. We do it. So you could just do 60, but whatever I did and every day, like you're going flat out seven days a week dictating nonstop. I was like, what's my monthly API bill going to be at this price? And it came out to like 70 or 80 bucks. And I was like, well, that would be an extraordinary amount of dictation. And I would hope that there was some compelling reason worth more than $70 that I embarked upon that project. So given that that's kind of the max point for me, I said that's actually very, very affordable. Now, you're going to if you want to spec out the costs and you want to do the post-processing that I really do feel is valuable, that's going to cost some more as well. Unless you're using Gemini, which needless to say, is a random person sitting in Jerusalem. I have no affiliation, nor with Google, nor Anthropic, nor Gemini, nor any major tech vendor for that matter. Um, I like Gemini not so much as a everyday model. Um, it's kind of underwhelmed in that respect, I would say, but for multimodal, I think it's got a lot to offer. And I think that the transcribing functionality whereby it can, um, process audio with a system prompt and both give you transcription that's cleaned up, that reduces two steps to one. And that for me is a very, very big deal. And, uh, I feel like even Google has haven't really sort of thought through how useful the that modality is and what kind of use cases you can achieve with it because i found in the course of this year just an endless list of really kind of system prompt system prompt stuff that i can say okay i've used it to capture context data for ai which is literally i might speak for if i wanted to have a good bank of context data about who knows my childhood. more realistically maybe my career goals something that would just be like really boring to type out so I'll just like sit in my car and record it for 10 minutes and that 10 minutes you get a lot of information in emails which is short text just there is a whole bunch and all these workflows kind of require a little bit of treatment afterwards and different treatment my context pipeline is kind of like just extract the bare essentials so you end up with me talking very loosely about sort of what i've done in my career where i've worked where i might like to work and it goes it condenses that down to very robotic language that is easy to chunk parse and maybe put into a vector database daniel has worked in technology daniel is a has been working in martin you know stuff like that that's not how you would speak um but i figure it's probably easier to parse for, after all, robots. So we've almost got to 20 minutes and this is actually a success because I wasted 20 minutes of the evening speaking into a microphone and the levels were shot and it was clipping and I said I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. What am I hoping to achieve in this? Okay, my fine tune was a dud as mentioned. Deepgram SDT, I'm really, really hopeful that this prototype will work. And it's a built in public open source. So anyone is welcome to use it if I make anything good. But that was really exciting for me last night when after hours of trying my own prototype, seeing someone just made something that works like that, you know, you're not going to have to build a custom conda environment and image. I have AMD GPU, which makes things much more complicated. I didn't find it. And I was about to give up. And I said, all right, let me just give Deepgram's Linux thing. shot and if it doesn't work, I'm just gonna go back to trying to vibe code something myself and when I ran the script I was using cloud code to do the installation process. It ran the script and oh my gosh, it works just like that. The tricky thing for all those who wants to know all the nitty gritty details was that I don't think it was actually struggling with transcription, but pasting, Wayland makes life very hard. And I think there was something not running at the right time. Anyway, Deepgram, I looked at how they actually handled that because it worked out of the... box when other stuff didn't and it was quite a clever little mechanism and but more so than that the accuracy was brilliant now what am i doing here this is going to be a 20 minute audio sample and i'm i think i've done one or two of these before but i did it with short snappy voice notes this is kind of long form this actually might be a better approximation for what's useful to me then voice memos like i need to buy three liters of milk tomorrow and peter bread which is probably how like half my voice note voice notes sound like if anyone were to i don't know like find my phone they'd be like this is the most boring person in the world although actually there are some like kind of uh journaling thoughts as well but it's a lot of content like that and the probably for the evaluation the most useful thing is slightly obscure tech github nucleano uh hugging face not so obscure that it's not going to have a chance of knowing it but hopefully sufficiently well known that the model should get it i tried to do a little bit of speaking really fast and speaking very slowly i would say in general i've spoken delivered this at a faster pace than i usually would owing to strong coffee flowing through my bloodstream and the thing that i'm not going to get in this benchmark is background noise which in my first take that i had to get rid of my wife came in with my son and for a good night kiss And that actually would have been super helpful to get in because it was non-diarized or if we had diarization, a female, I could say, I want the male voice and that wasn't intended for transcription. And we're not going to get background noise like people honking their horns, which is something I've done in my main data set where I am trying to go back to some of my voice notes, annotate them and run a benchmark. But this is going to be just a pure, quick test and As someone working on a voice note idea, that's my sort of end motivation, besides thinking it's an absolutely outstanding technology that's coming to viability. And really, I know this sounds cheesy, can actually have a very transformative effect. It's, you know, voice technology has been life changing for folks living with disabilities. And I think there's something really nice about the fact that it can also benefit. you know, folks who are able-bodied and like we can all in different ways make this tech as useful as possible, regardless of the exact way that we're using it. And I think there's something very powerful in that and it can be very cool. I see huge potential. What excites me about voice tech? A lot of things, actually. Firstly, the fact that it's cheap and accurate, as I mentioned at the very start of this, and it's getting better and better with stuff like accent handling. I'm not sure my fine tune will actually ever come to fruition in the sense that I'll use it day to day, as I imagine. I get like superb, flawless words, error rates, because I'm just kind of skeptical about local speech to text, as I mentioned. And I think the pace of innovation and improvement in the models, the main reasons for fine tuning from what I've seen have been people who are something that really blows my mind about ASR is the idea that it's inherently alingual or multilingual phonetic based so as folks who use speak very obscure languages that there may be very there might be a paucity of training data or almost none at all and therefore the accuracy is significantly reduced or folks in very critical environments i know there you this is used extensively in medical transcription and dispatcher your work as, um, you know, the call centers who send out ambulances, et cetera, where accuracy is absolutely paramount. And in the case of doctors, radiologists, they might be using very specialized vocab all the time. So those are kind of the main two things. And I'm not sure that really just for trying to make it better on a few random tech words with my slightly, I mean, I have an accent, but like not, you know, an accent that a few other million people have it. I'm not sure that. my little fine tune is going to actually like the bump in word error reduction if I ever actually figure out how to do it and get it up to the cloud by the time I've done that I suspect that the next generation of ASR will just be so good that it will kind of be, no, well, that would have been cool if it worked out, but I'll just use this instead. So that's going to be it for today's episode of voice training data. Single, long shot evaluation. Who am I going to compare? Whisper is always good as a benchmark, but I'm more interested in seeing Whisper head-to-head with two things, really. One is Whisper variants. So you've got these... projects like faster whisper uh distill whisper it's a bit confusing there's a whole bunch of them and the emerging asrs which are also a thing my intention for this is i'm not sure i'm going to have the time in any point in the foreseeable future to go back through this whole episode and create a proper source truth where i fix everything might do it if i can get one transcriptions as sufficiently close to perfection but What I would actually love to do on Hugging Face, I think would be a great, probably how I might visualize this is having the audio waveform play and then have the transcript for each model below it. And maybe even a, like, you know, two scale and maybe even a local one as well, like Local Whisper versus OpenAI API, et cetera. And... I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't, as well as the sort of headline finding of which had the best WER, but that would require the source of truth. Okay, that's it. I hope this was, I don't know, maybe useful for other folks interested in STT. You want to see that I always feel, think I've just said as something I didn't intend to. STT, I said for those. listen carefully, including hopefully the models themselves. This has been myself, Daniel Rosehill. For more jumbled repositories about my roving interest in AI, but particularly agentic, MCP and voice tech, you can find me on GitHub, Hugging Face, where else? Danielrosehill.com, which is my personal website, as well as this podcast, whose name I sadly cannot remember. Until next time, thanks for listening.
Hello and welcome to a audio dataset consisting of one single episode of a nonexistent podcast. Or it I may append this to a podcast that I set up recently regarding my with my thoughts on speech tech and A. I. In particular, more A. I. And generative A. I. I would I would say. But in any event, the purpose of this voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation, they might say, for different speech attacks models. I'm doing this because I thought I'd made a great breakthrough in my journey with speech tech and that was succeeding in the elusive task of fine tuning whisper. Whisper is, and I'm to just talk, I'm trying to mix up. I'm going to try a few different styles of speaking whisper something at some points as well. And I'll go back to speaking loud in in different parts are going to sound really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to push a speech to text model through its paces, which is trying to make sense of is this guy just rambling on incoherently in one long sentence or are these just actually a series of step standalone, standalone, standalone sentences? And how is it going to handle step alone? That's not a word. What happens when you use speech to text and you use a fake word? And then you're like, wait, that's not actually that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why was I trying to fine tune Whisper? And what is Whisper? As I said, I'm going to try to record this at a couple of different levels of technicality for folks who are in the normal world and not totally stuck down the rabbit hole of AI, which you have to say is a really wonderful rabbit hole to be done. It's a really interesting area and speech and voice tech is is the aspect of it that I find actually most I'm not sure I would say the most interesting because there's just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. I'm persevering hard with the task of trying to get a good solution working for Linux, which if anyone actually does listen to this, not just for the training data and for the actual content, is sparked. I had, besides the fine tune not working, well that was the failure. I used Claude code because one thinks these days that there is nothing short of solving, you know, the the reason of life or something that clause and agentic AI can't do, which is not really the case. It does seem that way sometimes, but it fails a lot as well. And this is one of those instances where last week I put together an hour of voice training data, basically speaking just random things for three minutes. It was actually kind of tedious because the texts were really weird. Some of them were, it was like it was AI generated. I tried before to read Sherlock Holmes for an hour and I just couldn't, I was so bored after ten minutes that I was like, okay, no, I'm just gonna have to find something else to read. So I used a created with AI Studio, VibeCoded, a synthetic text generator which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note like I'm recording an email, give me a short story to read, give me prose to read. So I came up with all these different things and they added a little timer to it so I could see how close I was to one hour. And I spent like an hour one afternoon or probably two hours by the time you do retakes and whatever because you want to it gave me a source of truth which I'm not sure if that's the scientific way to approach this topic of gathering training data but I thought made sense. I have a lot of audio data from recording voice notes which I've also kind of used, been experimenting with using for a different purpose. Slightly different annotating task types. It's more a text classification experiment or Well, it's more than that actually. I'm working on a voice app. So it's a prototype, I guess, is really more accurate. But you can do that and you can work backwards. Listen back to a voice note and you painfully go through one of those transcribing, where you start and stop and scrub around it and you fix the errors, but it's really, really pouring to do that. So I thought it would be less tedious in the long term if I just recorded the source of truth. So it gave me these three minutes snippets. I recorded them and saved an MP3 and a TXT in the same folder and I created an error that data. So I was very hopeful, quietly, a little bit hopeful that I would be able, that I could actually fine tune Whisper. I want to fine tune Whisper because when I got into voice tech last November, my wife was in the US and I was alone at home. And when crazy people like me do really wild things like use voice to tech technology. That was basically when I started doing it, I didn't feel like a crazy person speaking to myself. And my expectations weren't that high. I'd used speech tech now and again, tried it out. I was like, it'd be really cool if you could just like speak into your computer and whatever I tried out that had Linux support was just, it was not good basically. And this blew me away from the first go. I mean, it wasn't one hundred percent accurate out of the box and it took work, but it was good enough that there was a solid foundation and it kind of passed that pivot point that it's actually worth doing this. You know, there's a point where it's so like, the transcript is you don't have to get one hundred percent accuracy for it to be worth your time for speech to text to be a worthwhile addition to your productivity. But you do need to get above, let's say, I don't know, eighty five percent. If it's sixty percent or fifty percent, you inevitably say, Screw it, I'll just type it. Because you end up missing errors in the transcript and it becomes actually worse. You end up in a worse position than you started with it. That's been my experience. So I was like, Oh, this is actually really, really good now. How did that happen? And the answer is ASR, Whisper being open sourced and the transformer architecture, if you want to go back to the underpinnings, which really blows my mind and it's on my list to read through that paper. All you need is attention as attentively as can be done with my limited brain because it's super super high level stuff, super advanced stuff, mean. That I think of all the things that are fascinating about the sudden rise in AI and the dramatic capabilities, I find it fascinating that few people are like, hang on, you've got this thing that can speak to you like a chatbot, an LLM. And then you've got image generation. Okay. So firstly, two things on the surface have nothing in common. So how did that just happen all at the same time? And then when you extend that further, you're like, Suno. You can sing a song and AI will come up with an instrumental. And then you've got Whisper and you're like, Wait a second. How did all this stuff If it's all AI, there has to be some commonality. Otherwise, are totally different technologies on the surface of it. And the transformer architecture is, as far as I know, the answer. And I can't even say, can't even pretend that I really understand what the transformer architecture means in-depth. But I have scanned this and as I said, I want to print it and really kind of think over it at some point. And I'll probably feel bad about myself, I think, because weren't those guys in twenties? Like, that's crazy. I think I asked ChatGPT once who wrote that paper and how old were they when it was published in ArcSiv? And I was expecting like, I don't know, what do you imagine? I personally imagine kind of like, you you have these breakthroughs during COVID and things like that, where like these kind of really obscure scientists who are in their 50s and they've just kind of been laboring in labs and wearily in writing and publishing in kind of obscure academic publications. And they finally hit a big or win a Nobel Prize and then their household names. So that was kind of what I had in mind. That was the mental image I'd formed of the birth of ArcSim. Like I wasn't expecting twenty somethings in San Francisco. I thought that was both very funny, very cool, and actually kind of inspiring. It's nice to think that people who just you might put them in the kind of milieu or bubble or world that you are in incredibly in through a series of connections that are coming up with such literally world changing innovations. So that was I thought anyway, that's that that was cool. Okay. Voice training data. How are we doing? We're about ten minutes, and I'm still talking about voice technology. So Whisper was brilliant, and I was so excited that my first instinct was to guess, like, Oh my gosh, I have to get a really good microphone for this. So I didn't go on a spending spree because I said, I'm gonna have to just wait a month and see if I still use this. And it just kind of became it's become really part of my daily routine. Like if I'm writing an email, I'll record a voice note and then I've developed and it's nice to see that everyone is like developing the same things in parallel. That's kind of a weird thing to say, when I started working on these prototypes on GitHub, which is where I just kind of share very freely and loosely ideas and first iterations on concepts. And for want of a better word, I called it like LLM post processing or clean up or basically a system prompt that after you get back the raw text from Whisper, you run it through a model and say, okay, this is crappy text like add sentence structure and, you know, fix it up. And now when I'm exploring the different tools that are out there that people have built, I see quite a number of projects have basically done the same thing. Lest that be misconstrued, I'm not saying for a millisecond that I inspired them. I'm sure this has been a thing that's been integrated into tools for a while, but it's the kind of thing that when you start using these tools every day, the need for it is almost instantly apparent because text that doesn't have any punctuation or paragraph spacing takes a long time to, you know, it takes so long to get it into a presentable email that again, moves speech tech into that before that inflection point where you're like, nah, it's just not worth it. It's like, it'll just be quicker to type this. So it's a big, it's a little touch that actually is a big deal. So I was on Whisper and I've been using Whisper and I kind of early on found a couple of tools. I couldn't find what I was looking for on Linux, which is basically just something that'll run-in the background. You'll give it an API key and it will just like transcribe with like a little key to start and stop the dictation. And the issues where I discovered that like most people involved in creating these projects were very much focused on local models, running Whisper locally because you can. And I tried that a bunch of times and just never got results that were as good as the cloud. And when I began looking at the cost of the speech to text APIs and what I was spending, I just thought there is it's actually, in my opinion, just one of the better deals in API spending in the cloud. Like, it's just not that expensive for very, very good models that are much more, you know, you're gonna be able to run the full model, the latest model versus whatever you can run on your average GPU unless you want to buy a crazy GPU. It doesn't really make sense to me. Privacy is another concern that I know is kind of like a very much a separate thing that people just don't want their voice data and their voice leaving their local environment maybe for regulatory reasons as well. But I'm not in that. I neither really care about people listening to my, grocery list, consisting of, reminding myself that I need to buy more beer, Cheetos, and hummus, which is kind of the three staples of my diet during periods of poor nutrition. But the kind of stuff that I transcribe, it's just not. It's not a privacy thing I'm that sort of sensitive about and I don't do anything so sensitive or secure that requires air capping. I looked at the pricing and especially the kind of older model mini. Some of them are very, very affordable and I did a calculation once with ChatGPT and I was like, okay, this is the API price for I can't remember whatever the model was. Let's say I just go at it like nonstop, which rarely happens. Probably, I would say on average I might dictate thirty to sixty minutes per day if I was probably summing up the emails, documents, outlines, which is a lot, but it's it's still a fairly modest amount. And I was like, well, some days I do go on like one or two days where I've been usually when I'm like kind of out of the house and just have something like I have nothing else to do. Like if I'm at a hospital, we have a newborn and you're waiting for like eight hours and hours for an appointment. And I would probably have listened to podcasts before becoming a speech fanatic. And I'm like, Oh, wait, let me just get down. Let me just get these ideas out of my head. And that's when I'll go on my speech binges. But those are like once every few months, like not frequently. But I said, okay, let's just say if I'm going to price out cloud STT. If I was like dedicated every second of every waking hour to transcribing for some odd reason, I mean I'd have to eat and use the toilet. There's only so many hours I'm awake for. So let's just say a maximum of forty five minutes in the hour, then I said, All right, let's just say fifty. Who knows? You're dictating on the toilet. We do it. So you could just do sixty, but whatever I did and every day, like you're going flat out seven days a week dictating nonstop. I was like, What's my monthly API bill going to be at this price? And it came out to like seventy or eighty bucks. And I was like, Well, that would be an extraordinary amount of dictation. And I would hope that there was some compelling reason worth more than seventy dollars that I embarked upon that project. So given that that's kind of the max point for me I said that's actually very very affordable. Now you're gonna if you want to spec out the costs and you want to do the post processing that I really do feel is valuable, that's going to cost some more as well. Unless you're using Gemini, which needless to say is a random person sitting in Jerusalem. I have no affiliation nor with Google nor Anthropic nor Gemini nor any major tech vendor for that matter. I like Gemini not so much as a everyday model. It's kind of underwhelmed in that respect, I would say. But for multimodal, I think it's got a lot to offer. And I think that the transcribing functionality whereby it can, process audio with a system prompt and both give you transcription that's cleaned up. That reduces two steps to one. And that for me is a very, very big deal. And I feel like even Google hasn't really sort of thought through how useful the that modality is and what kind of use cases you can achieve with it. Because I found in the course of this year just an endless list of really kind of system prompt stuff that I can say, okay, I've used it to capture context data for AI, which is literally I might speak for if I wanted to have a good bank of context data about who knows my childhood. More realistically, maybe my career goals, something that would just be like really boring to type out. So I'll just like sit in my car and record it for ten minutes. And that ten minutes you get a lot of information in. Emails, which is short text. Just there is a whole bunch. And all these workflows kind of require a little bit of treatment afterwards and different treatment. My context pipeline is kind of like just extract the bare essentials. You end up with me talking very loosely about sort of what I've done in my career, where I've worked, where I might like to work. And it goes, it condenses that down to very robotic language that is easy to chunk parse and maybe put into a vector database. Daniel has worked in technology. Daniel has been working in, know, stuff like that. That's not how you would speak, but I figure it's probably easier to parse for, after all, robots. So we've almost got to twenty minutes and this is actually a success because I wasted twenty minutes of my of the evening speaking into you in microphone and the levels were shot and was clipping and I said I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. What am I hoping to achieve in this? Okay, my fine tune was a dud as mentioned. Deepgram STT, I'm really, really hopeful that this prototype will work and it's a build in public open source so anyone is welcome to use it if I make anything good. But that was really exciting for me last night when after hours of trying my own prototype, seeing someone just made something that works like that, you you're not gonna have to build a custom conda environment and image. I have AMD GPU which makes things much more complicated. I didn't find it and I was about to give up and I said, All right, let me just give Deepgram's Linux thing a shot. And if this doesn't work, I'm just gonna go back to trying to vibe code something myself. And when I ran the script, I was using Cloud Code to do the installation process, it ran the script and, oh my gosh, it works just like that. The tricky thing for all those who wants to know all the nitty, ditty, nitty gritty details was that I don't think it was actually struggling with transcription, but pasting Weyland makes life very hard. And I think there was something not running at the right time. Anyway, Deepgram, I looked at how they actually handle that because it worked out of the box when other stuff didn't. And it was quite a clever little mechanism. And but more so than that, the accuracy was brilliant. Now what am I what am I doing here? This is gonna be a twenty minute audio sample. And I'm I think I've done one or two of these before, but I did it with short, snappy voice notes. This is kind of long form. This actually might be a better approximation for what's useful to me than voice memos. Like, I need to buy three liters of milk tomorrow and peter bread, which is probably how half my voice notes sound. Like if anyone were to find my phone they'd be like this is the most boring person in the world. Although actually there are some journaling thoughts as well, but it's a lot of content like that. And the probably for the evaluation, the most useful thing is slightly obscure tech, GitHub, Nucleano, hugging face, not so obscure that it's not gonna have a chance of knowing it, but hopefully sufficiently well known that the model should get it. I tried to do a little bit of speaking really fast and speaking very slowly. Would say in general, I've spoken, delivered this at a faster pace than I usually would owing to strong coffee flowing through my bloodstream. And the thing that I'm not gonna get in this benchmark is background noise, which in my first take that I had to get rid of, my wife came in with my son and for a good night kiss. And that actually would have been super helpful to get in because it was non diarized or if we had diarization. A female, I could say, I want the male voice and that wasn't intended for transcription. And we're not going to get background noise like people honking their horns, which is something I've done in my main data set where I am trying to go back to some of my voice notes, annotate them and run a benchmark. But this is going to be just a pure quick test. And as someone I'm working on a voice note idea. That's my sort of end motivation besides thinking it's an absolutely outstanding technology that's coming to viability. And really, I know this sounds cheesy, can actually have a very transformative effect. Voice technology has been life changing for folks living with disabilities. And I think there's something really nice about the fact that it can also benefit folks who are able-bodied and we can all in different ways make this tech as useful as possible regardless of the exact way that we're using it. And I think there's something very powerful in that, and it can be very cool. I see huge potential. What excites me about voice tech? A lot of things actually. Firstly, the fact that it's cheap and accurate, as I mentioned at the very start of this, and it's getting better and better with stuff like accent handling. I'm not sure my fine tune will actually ever come to fruition in the sense that I'll use it day to day as I imagine. I get like superb, flawless words error rates because I'm just kind of skeptical about local speech to text, as I mentioned. And I think the pace of innovation and improvement in the models, the main reasons for fine tuning from what I've seen have been people who are something that really blows blows my mind about ASR is the idea that it's inherently ailingual or multilingual, phonetic based. So as folks who use speak very obscure languages that there may be very there might be a paucity of training data or almost none at all, and therefore the accuracy is significantly reduced. Or folks in very critical environments, I know there are this is used extensively in medical transcription and dispatcher work as, you know the call centers who send out ambulances etc. Where accuracy is absolutely paramount and in the case of doctors radiologists they might be using very specialized vocab all the time. So those are kind of the main two things, and I'm not sure that really just for trying to make it better on a few random tech words with my slightly I mean, I have an accent, but, like, not, you know, an accent that a few other million people have ish. I'm not sure that my little fine tune is gonna actually like, the bump in word error reduction, if I ever actually figure out how to do it and get it up to the cloud, by the time we've done that, I suspect that the next generation of ASR will just be so good that it will kind of be, well, that would have been cool if it worked out, but I'll just use this instead. So that's gonna be it for today's episode of voice training data. Single, long shot evaluation. Who am I gonna compare? Whisper is always good as a benchmark, but I'm more interested in seeing Whisper head to head with two things really. One is Whisper variants. So you've got these projects like Faster Whisper. Distill Whisper. It's a bit confusing. There's a whole bunch of them. And the emerging ASRs, which are also a thing. My intention for this is I'm not sure I'm gonna have the time in any point in the foreseeable future to go back to this whole episode and create a proper source truth where I fix everything. Might do it if I can get one transcription that's sufficiently close to perfection. But what I would actually love to do on Hugging Face, I think would be a great probably how I might visualize this is having the audio waveform play and then have the transcript for each model below it and maybe even a, like, you know, to scale and maybe even a local one as well, like local whisper versus OpenAI API, etcetera. And I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't as well as the sort of headline finding of which had the best W E R but that would require the source of truth. Okay, that's it. I hope this was, I don't know, maybe useful for other folks interested in STT. You want to see I always think I've just said it as something I didn't intend to. STT, I said for those. Listen carefully, including hopefully the models themselves. This has been myself, Daniel Rosol. For more jumbled repositories about my roving interest in AI but particularly AgenTic, MCP and VoiceTech you can find me on GitHub. Hugging Face. Where else? DanielRosel dot com, which is my personal website, as well as this podcast whose name I sadly cannot remember. Until next time. Thanks for listening.
Hello and welcome to a audio data set consisting of one single episode of a non-existent podcast. Or I may append this to a podcast that I set up recently regarding my with my thoughts on speech tech and AI in particular, more AI in generative AI, I would say. But in any event, the purpose of this Voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation, as they might say, for different speech attack models. And I'm doing this because I thought I had made a great breakthrough in my journey with speech tech, and that was succeeding in the elusive task of fine-tuning Whisper. Whisper is, and I'm going to just talk, I'm trying to mix up, I'm going to try a few different styles of speaking. I might whisper something at some point. As well. And I'll go back to speaking loud in, in different parts. I'm going to sound really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to put a speech attacks model through its paces, which is trying to make sense of is this guy just rambling on incoherently in one long sentence or are these just actually a series of step, standalone, step alone, standalone sentences? And how is it gonna handle step alone? That's not a word. What happens when you use speech to text and you use a fake word? And then you're like, wait, that's not actually, that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why was it trying to fine tune Whisper? And what is Whisper? As I said, I'm going to try to record this at a couple of different levels of technicality for folks who are, you know, in the normal world and not totally stuck down the rabbit hole of AI, which I have to say is a really wonderful rabbit hole to be down. It's a really interesting area and speech and voice tech is the aspect of it that I find actually the most, I'm not sure I would say the most interesting because there's just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I'm persevering hard with the task of trying to get a good solution working for Linux, which if anyone actually does listen to this, not just for the training data and for the actual content, this is sparked I had, besides the fine tune not working, well, that was the failure. Um, I used Claude code because one thinks these days that there is nothing short of solving, you know, the, the reason of life or something, that Claude and agentic AI can't do, which is not really the case. Uh, it does seem that way sometimes, but it fails a lot as well. And this is one of those, instances where last week I put together an hour of voice training data, basically speaking, just random things for 3 minutes. And it was actually kind of tedious because the texts were really weird. Some of them were it was like it was AI generated. I tried before to read Sherlock Holmes for an hour and I just couldn't. I was so bored after 10 minutes that I was like, okay, no, I'm just going to have to find something else to read. So I used a created with AI studio vibe coded a synthetic text generator. Which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note, like I'm recording an email, give me a short story to read, give me prose to read. So I came up with all these different things and they added a little timer to it so I could see how close I was to one hour. And I spent like an hour one afternoon or probably two hours by the time you you do retakes. And whatever, because you want to, it gave me a source of truth, which I'm not sure if that's the scientific way to approach this topic of gathering, training data, but I thought made sense. Um, I have a lot of audio data from recording voice notes, which I've also kind of used, been experimenting with using for a different purpose, slightly different annotating task types. It's more a text classification experiment or, Well, it's more than that actually. I'm working on a voice app. So it's a prototype, I guess, is really more accurate. But you can do that and you can work backwards. You're like, you listen back to a voice note and you painfully go through one of those transcribing, you know, where you start and stop and scrub around it and you fix the errors, but it's really, really boring to do that. So I thought it would be less tedious in the long term if I just recorded the source of truth. So it gave me these three minute snippets. I recorded them. It saved an MP3 and a TXT in the same folder, and I created an error with that data. So I was very hopeful, quietly, a little bit hopeful that I could actually fine tune Whisper. I want to fine tune Whisper because when I got into Voicetech last November, my wife was in the US and I was alone at home. And when crazy people like me do really wild things like use voice to tech technology. That was basically when I started doing it, I didn't feel like a crazy person speaking to myself. And my expectations weren't that high. I used speech tech now and again, tried it out. It was like, it'd be really cool if you could just, like, speak into your computer. And whatever I tried out that had Linux support was just. It was not good, basically. And this blew me away from the first go. I mean, it wasn't 100% accurate out of the box and it took work, but it was good enough that there was a solid foundation and it kind of passed that pivot point that it's actually worth doing this. You know, there's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time for speech attacks to be a worthwhile addition to your productivity, but you do need to get above, let's say, I don't know, 85%. If it's 60% or 50%, you inevitably say, screw it, I'll just type it because you end up missing errors in the transcript and it becomes actually worse. You end up in a worse position than you started with. That's been my experience. So I was like, oh, this is actually really, really good now. How did that happen? And the answer is ASR whisper being open source and the transformer architecture. If you want to go back to the to the underpinnings, which really blows my mind and it's on my list. To read through that paper. All you need is attention as attentively as can be done with my limited brain because it's super, super high level stuff, super advanced stuff, I mean. But that, I think of all the things that are fascinating about the sudden rise in AI and the dramatic capabilities. I find it fascinating that a few people are like, hang on, you've got this thing that can speak to you, like a chatbot, an LLM, and then you've got image generation. Okay, so firstly, those two things on the surface have nothing in common. So like, how are they, how did that just happen all at the same time? And then when you extend that further, you're like, Suno, right? You can sing a song and AI will come up with and instrumental. And then you've got Whisper and you're like, wait a second, how did all this stuff, like, if it's all AI, what's like, there has to be some commonality. Otherwise, these are totally different technologies on the surface of it. And the Transformer architecture is, as far as I know, the answer. And I can't even say, can't even pretend that I really understand what the Transformer architecture means. In depth, but I have scanned it and as I said, I want to print it and really kind of think over it at some point. And I'll probably feel bad about myself, I think, because weren't those guys in their 20s? Like, that's crazy. I think I asked ChatGPT once who wrote that paper and how old were they when it was published in Arciv? And I was expecting, like, I don't know, What do you imagine? I personally imagine kind of like, you know, you have these breakthroughs during COVID and things like that where like these kind of really obscure scientists are like in their 50s and they've just kind of been laboring in labs and wearily in writing and publishing in kind of obscure academic publications. And they finally like hit a big or win a Nobel Prize and then their household names. So that was kind of what I had in mind. That was the mental image I'd formed of the birth of Arcsight. Like I wasn't expecting 20-somethings in San Francisco, though. I thought that was both very, very funny, very cool, and actually kind of inspiring. It's nice to think that people who, you know, just you might put them in the kind of milieu or bubble or world that you are in are credibly in through, you know, the series of connections that are coming up with such literally world changing innovations. So that was, I thought, anyway. That's that was cool. Okay, voice training data. How are we doing? We're about 10 minutes and I'm still talking about voice technology. So Whisper was brilliant and I was so excited that I was my first instinct was to like guess like, oh my gosh, I have to get like a really good microphone for this. So I didn't go on a spending spree because I said, I'm gonna have to just wait a month and see if I still use this. And It just kind of became, it's become really part of my daily routine. Like if I'm writing an email, I'll record a voice note. And then I've developed and it's nice to see that everyone is like developing the same things in parallel. Like that's my kind of a weird thing to say, but when I look, I kind of came, when I started working on this, these prototypes on GitHub, which is where I just kind of share very freely and loosely, ideas and first iterations on concepts. And for want of a better word, I called it like LLM post-processing or cleanup or basically a system prompt that after you get back the raw text from Whisper, you run it through a model and say, okay, this is crappy text, like add sentence structure and fix it up. And now when I'm exploring the different tools that are out there that people have built, I see quite a number of projects have basically done the same thing, lest that be misconstrued. I'm not saying for a millisecond that I inspired them. I'm sure this has been a thing that's been integrated into tools for a while, but it's the kind of thing that when you start using these tools every day, the need for it is almost instantly apparent because text that doesn't have any punctuation or Paragraph spacing takes a long time to, you know, it takes so long to get it into a presentable email that again, it's, it's, it, it moves speech tech into that before that inflection point where you're like, no, it's just not worth it. It's like, it's, it'll just be quicker to type this. So it's a big, it's a little touch that actually is a big deal. Uh, so I was on Whisper and I've been using Whisper and I kind of, early on found a couple of tools. I couldn't find what I was looking for on Linux, which is basically just something that'll run in the background. It'll give it an API key and it will just like transcribe with like a little key to start and stop the dictation. And the issues were I discovered that like most people involved in creating these projects were very much focused on local models, running Whisper locally because you can. And I tried that a bunch of times and just never got results that were as good as the cloud. And when I began looking at the cost of the speech to text APIs and what I was spending, I just thought there is, it's actually, in my opinion, just one of the better deals in API spending and in cloud. Like it's just not that expensive for very, very good models that are much more, you know, you're gonna be able to run the full model. The latest model versus whatever you can run on your average GPU, unless you want to buy a crazy GPU. It doesn't really make sense to me. Now, privacy is another concern that I know is kind of like a very much a separate thing that people just don't want their voice data and their voice leaving their local environment, maybe for regulatory reasons as well. But I'm not in that. I neither really care about people listening to my grocery list consisting of reminding myself that I need to buy more beer, Cheetos, and hummus, which is kind of the three staples of my diet during periods of poorer nutrition. But the kind of stuff that I transcribe, it's just not, it's not a privacy thing I'm that sort of sensitive about and I don't do anything so sensitive or secure that requires air gapping. So I looked at the pricing and especially the kind of older model mini Some of them are very, very affordable. And I did a back of the, I did a calculation once with ChatGPT and I was like, okay, this is the API price for I can't remember whatever the model was. Let's say I just go at it like nonstop, which it rarely happens. Probably, I would say on average, I might dictate 30 to 60 minutes per day if I was probably summing up the emails, documents, outlines, which is a lot, but it's still a fairly modest amount. And I was like, Some days I do go on like one or two days where I've been usually when I'm like kind of out of the house and just have something like I have nothing else to do. Like if I'm at a hospital, we have a newborn and you're waiting for like eight hours and hours for an appointment. And I would probably have listened to podcasts before becoming a speech fanatic. And I'm like, oh, wait, let me just get down. Let me just get these ideas out of my head. And that's when I'll go on my speech binges. But those are like once every few months, like not frequently. But I said, okay, let's just say if I'm gonna price out Cloud SCT, if I was like dedicated every second of every waking hour to transcribing for some odd reason, I mean, I'd have to like eat and use the toilet. Like, you know, there's only so many hours I'm awake for. So like, let's just say a maximum of like 40 hour, 45 minutes. In the hour. Then I said, all right, let's just say 50. Who knows? You're dictating on the toilet. We do it. So it could be. You could just do 60. But whatever I did. And every day, like, you're going flat out seven days a week dictating non-stop I was like, what's my monthly API bill gonna be at this price? And it came out to, like, 70 or 80 bucks. And I was like, well, that would be an extraordinary. Amount of dictation. And I would hope that there was some compelling reason more worth more than $70 that I embarked upon that project. So given that that's kind of the max point for me, I said that's actually very, very affordable. Now you're gonna, if you want to spec out the costs and you want to do the post-processing that I really do feel is valuable, that's gonna cost some more as well, unless you're using Gemini, which needless to say is a random person sitting in Jerusalem. I have no affiliation, nor with Google, nor anthropic, nor Gemini, nor any major tech vendor for that matter. I like Gemini not so much as a everyday model. It's kind of underwhelmed in that respect, I would say. But for multimodal, I think it's got a lot to offer. And I think that the transcribing functionality whereby it can process audio with a system prompt and both give you transcription that's cleaned up that reduces two steps to one. And that for me is a very, very big deal. And I feel like even Google has haven't really sort of thought through how useful the that modality is and what kind of use cases you can achieve with it. Because I found in the course of this year, just an endless list of really kind of system prompt system prompt stuff that I can say, okay, I've used it to capture context data for AI, which is literally I might speak for if I wanted to have a good bank of context data about who knows my childhood more realistically, maybe my career goals, something that would just be like really boring to type out. So I'll just like sit in my car and record it for 10 minutes. And that 10 minutes you get a lot of information in. Um, emails, which is short text, just there is a whole bunch and all these workflows kind of require a little bit of treatment afterwards and different treatment. My context pipeline is kind of like just extract the bare essentials. So you end up with me talking very loosely about sort of what I've done in my career, where I've worked, where I might like to work. And it goes, it condenses that down to very robotic language that is easy to chunk parse and maybe put into a vector database. Daniel has worked in technology. Daniel has been working in, you know, stuff like that. That's not how you would speak, but I figure it's probably easier to parse for, after all, robots. So we've almost got to 20 minutes and this is actually a success because I wasted 20 minutes of the evening speaking into a microphone and the levels were shot and it was clipping and I said, I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. What am I hoping to achieve in this? Okay, my fine tune was a dud as mentioned. DeepChrom ST, I'm really, really hopeful that this prototype will work and it's a build in public open source, so anyone is welcome to use it if I make anything good. But that was really exciting for me last night when after hours of trying my own prototype, seeing someone just made something that works like that, you know, you're not gonna have to build a custom conda environment and image. I have AMD GPU, which makes things much more complicated. I didn't find it. And I was about to give up and I said, all right, let me just give Deep Grams Linux thing a shot. And if this doesn't work, I'm just going to go back to trying to Vibe code something myself. And when I ran the script, I was using Claude code to do the installation process. It ran the script and oh my gosh, it works just like that. The tricky thing For all those who want to know all the nitty gritty details, was that I don't think it was actually struggling with transcription, but pasting Wayland makes life very hard. And I think there was something not running the right time. Anyway, Deepgram, I looked at how they actually handled that because it worked out of the box when other stuff didn't. And it was quite a clever little mechanism. And but more so than that, the accuracy was brilliant. Now, what am I doing here? This is going to be a 20 minute audio sample. And I think I've done one or two of these before, but I did it with short snappy voice notes. This is kind of long form. This actually might be a better approximation for what's useful to me than voice memos. Like, I need to buy three Bread, eaters of milk tomorrow and Peter bread, which is probably how like half my voice notes sound. Like if anyone were to, I don't know, like find my phone, they'd be like, this is the most boring person in the world. Although actually, there are some like kind of journaling thoughts as well, but it's a lot of content like that. And the probably for the evaluation, the most useful thing is slightly obscure tech, GitHub, NeocleNo, hugging face, Not so obscure that it's not going to have a chance of knowing it, but hopefully sufficiently well known that the model should get it. I tried to do a little bit of speaking really fast and speaking very slowly. I would say in general, I've spoken, delivered this at a faster pace than I usually would owing to strong coffee flowing through my bloodstream. And the thing that I'm not going to get in this benchmark is background noise, which in my first take that I had to get rid of, My wife came in with my son and for a goodnight kiss. And that actually would have been super helpful to get in because it was non diarized or if we had diarization, a female, I could say, I want the male voice and that wasn't intended for transcription. And we're not going to get background noise like people honking their horns, which is something I've done in my main data set where I am trying to go back to some of my voice notes. Annotate them and run a benchmark. But this is going to be just a pure quick test. And as someone, I'm working on a voice note idea. That's my sort of end motivation. Besides thinking it's an ask to the outstanding technology that's coming to viability. And really, I know this sounds cheesy, can actually have a very transformative effect. It's, you know, voice technology has been life changing for folks living with disabilities. And I think there's something really nice about the fact that it can also benefit, you know, folks who are able bodied and like we can all in different ways make this tech as useful as possible, regardless of the exact way that we're using it. And I think there's something very powerful in that and it can be very cool. I see huge potential. What excites me about Voicetech? A lot of things actually. Firstly, the fact that it's cheap and accurate, as I mentioned at the very start of this. And it's getting better and better with stuff like accent handling. I'm not sure my fine-tune will actually ever come to fruition in the sense that I'll use it day to day as I imagine. I get like superb flawless words error rates because I'm just kind of skeptical about Local speech to text, as I mentioned, and I think the pace of innovation and improvement in the models, the main reasons for fine tuning from what I've seen have been people who are something that really blows my mind about ASR is the idea that it's inherently a lingual or multilingual phonetic based. So as folks who use speak very obscure languages, that there might be a paucity of training data or almost none at all, and therefore the accuracy is significantly reduced. Or folks in very critical environments, I know this is used extensively in medical transcription and dispatcher work, the call centers who send out ambulances, et cetera, where accuracy is absolutely paramount. And in the case of doctors, radiologist, they might be using very specialized vocab all the time. So those are kind of the main two things that I'm not sure that really just for trying to make it better on a few random tech words with my slightly, I mean, I have an accent, but like not, you know, an accent that a few other million people have ish. I'm not sure that my little fine tune is gonna actually like the bump in word error reduction, if I ever actually figure out how to do it and get it up to the cloud. By the time we've done that, I suspect that the next generation of ASR will just be so good that it will kind of be, well, that would have been cool if it worked out, but I'll just use this instead. So that's going to be it for today's episode of voice training data. Single long shot evaluation. Who am I going to compare? Whisper is always good as a benchmark, but I'm more interested in seeing Whisper head to head with two things, really. One is Whisper variants. So you've got these projects like faster Distill Whisper, it's a bit confusing, there's a whole bunch of them. And the emerging ASRs, which are also a thing. My intention for this is I'm not sure I'm going to have the time in any point in the foreseeable future to go back through this whole episode and create a proper source truth, where I fix everything. Might do it if I can get one transcriptions that sufficiently close to perfection. But what I would actually love to do on Hugging Face, I think would be a great probably how I might visualize this is having the audio waveform play and then have the transcript for each model below it and maybe even a like, you know, to scale and maybe even a local one as well, like local whisper versus OpenAI API, et cetera. And, I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't, as well as the sort of headline finding of which had the best WER, but that would require the source of truth. Okay, that's it. I hope this was, I don't know, maybe useful for other folks interested in STT. You want to see that I always feel think I've just said as something I didn't intend to. STT, I said for those. Listen carefully, including hopefully the models themselves. This has been myself, Daniel Rosell. For more jumbled repositories about my roving interests in AI, but particularly agentic, MCP and Voicetech, you can find me on GitHub, huggingface.com, which is my personal website, as well as this podcast, whose name I sadly cannot remember. Until next time, thanks for listening.
Hello and welcome to a audio data set consisting of one single episode of a non-existent podcast. Or it, uh, I may append this to a podcast that I set up recently. Um, regarding my, uh, with my thoughts on speech, tech and AI in particular, more AI and generative AI, I would, uh, I would say, but in any event, the purpose of this, um, voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation, as they might say, for different speech to text models. And I'm doing this because I, uh, I thought I'd made a great breakthrough in my journey with speech tech, and that was succeeding in the elusive task of fine tuning. Whisper, whisper is. And I'm going to just talk. I'm trying to mix up, uh, I'm going to try a few different styles of speaking. I might whisper something at some point as well, and I'll go back to speaking loud in, uh, in different parts. I'm going to sound really like a crazy person, because I'm also going to try to speak at different pitches and cadences in order to really try to put a speech to text model through its paces, which is trying to make sense of, is this guy just on incoherently in one long sentence, or are these just actually a series of step standalone, standalone, standalone sentences? And how is it going to handle step alone? That's not a word. Uh, what happens when you use speech to text and you use a fake word and then you're like, wait, that's not actually that word doesn't exist. How does AI handle that? And, uh, these and more are all the questions that I'm seeking to answer in this training data. Now, why did why was it trying to fine tune a whisper? And what is whisper? As I said, I'm gonna try to, uh, record this at a couple of different levels of technicality for folks who are, uh, you know, in the normal, uh, world and not totally stuck down the rabbit hole of AI, uh, which I have to say is a really wonderful, uh, rabbit hole to be to be down. Um, it's a really interesting area. And speech and voice tech is is the aspect of it that I find actually most. I'm not sure I would say the most interesting, because there's just so much that is fascinating in AI. Uh, but the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I'm persevering hard with the task of trying to guess a good solution working for Linux, which if anyone actually does listen to this, not just for the training data and for the actual content, uh, this is this is has sparked I had besides the fine tune not working. Well, that was the failure. Um, I used clod code because one thinks these days that there is nothing short of solving, you know, the, uh, the reason of life or something. Uh, that clod and agentic AI can't do, uh, which is not really the case. Uh, it does seem that way sometimes, but it fails a lot as well. And this is one of those, uh, instances where last week I put together an hour of voice training data, basically speaking just random things for three minutes. And, um, it was actually kind of tedious because the texts were really weird. Some of them were it was like it was AI generated. Um, I tried before to read Sherlock Holmes for an hour and I just couldn't. I was so bored, uh, after ten minutes that I was like, okay, now I'm just gonna have to find something else to read. So I used a created with AI studio vibe coded. A synthetic text generator. Um, which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note, like I'm recording an email, give me a short story to read, give me prose, um, to read. So I came up with all these different things, and I added a little timer to it so I could see how close I was to one hour. Um, and, uh, I spent like an hour one afternoon or probably two hours by the time you, um, you do retakes or whatever because you want to. It gave me a source of truth, which I'm not sure if that's the scientific way to approach this topic of gathering, uh, training data, but I thought it made sense. Um, I have a lot of audio data from recording voice notes, which I've also kind of used, um, been experimenting with using for a different purpose, slightly different annotating task types. It's more text classification experiment or uh, well, it's more than that, actually. I'm working on a voice app, so it's a prototype I guess is really more accurate. Um, but you can do that and you can work backwards. You're like, you listen back to a voice note and you painfully go through one of those transcribing, you know, where you start and stop and scrub around it and you fix the errors. But it's really, really boring to do that. So I thought it would be less tedious in the long term if I just recorded The Source of truth. So it gave me these three minute snippets. I recorded them and saved an MP3 and a txt in the same folder, and I created an hour of that data. Uh, so I was very hopeful, quietly, you know, a little bit hopeful that I would be able that I could actually fine tune, whisper. Um, I want to fine tune whisper because when I got into voice tech last November, my wife was in the US and I was alone at home. And you know, when crazy people like me do really wild things like use voice to tech, uh, technology. That was basically, um, when I started doing it, I didn't feel like a crazy person speaking to myself, and my expectations weren't that high. Uh, I used speech tech now and again. Um, tried it out. I was like, it'd be really cool if you could just, like, speak into your computer. And whatever I tried out that had Linux support was just. It was not good, basically. Um, and this blew me away from the first go. I mean, it wasn't 100% accurate out of the box and it took work, but it was good enough that there was a solid foundation and it kind of passed that, uh, pivot point that it's actually worth doing this. You know, there's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time for speech to text to be a worthwhile addition to your productivity. But you do need to get above. Let's say, I don't know, 85%. If it's 60% or 50%, you inevitably say, screw it. I'll just type it because you end up missing errors in the transcript and it becomes actually worse. You end up in a worse position than you started with. And that's been my experience. So, um, I was like, oh, this is actually really, really good. Now how did that happen? And the answer is ASR whisper being open sourced and the transformer architecture, if you want to go back to the, um, to the underpinnings, which really blows my mind and it's on my list to read through that paper. Um, all you need is attention as attentively as can be done with my limited brain because it's super, super high level stuff. Um, super advanced stuff. I mean, uh, but that I think of all the things that are fascinating about the sudden rise in AI and the dramatic capabilities. I find it fascinating that few people are like, hang on, you've got this thing that can speak to you like a chatbot, an LLM, and then you've got image generation. Okay, so firstly, those two things on the surface have nothing in common. Um, so like how are they how did that just happen all at the same time. And then when you extend that further, um, you're like sooner, right? You can sing a song and AI will like, come up with an instrumental and then you've got whisper and you're like, wait a second, how did all this stuff, like, if it's all AI, what's like there has to be some commonality. Otherwise these are four. These are totally different technologies on the surface of it. And, uh, the transformer architecture is, as far as I know, the answer. And I can't even say can't even pretend that I really understand what the transformer architecture means in depth, but I have scanned it and as I said, I want to print it and really kind of think over it at some point, and I'll probably feel bad about myself, I think, because weren't those guys in their in their 20s like, that's crazy. I think I asked ChatGPT once who were the who wrote that paper and how old were they when it was published in arXiv? And I was expecting like, I don't know, what do you what do you imagine? I personally imagine kind of like, you know, you have these breakthroughs during Covid and things like that where like these kind of really obscure scientists who are like in their 50s and they've just kind of been laboring in labs and, uh, wearily and writing in publishing in kind of obscure academic publications. And they finally, like, hit a big or win a Nobel Prize and then their household household names. Uh, so that was kind of what I had in mind. That was the mental image I'd formed of the birth of arXiv. Like, I wasn't expecting 20 somethings in San Francisco, though I thought that was both very, very funny, very cool, and actually kind of inspiring. It's nice to think that people who, you know, just you might put them in the kind of milieu or bubble or world that you are in or credibly in, through, you know, a series of connections that are coming up with such literally world changing, um, innovations. Uh, so that was, I thought, anyway, that, that that was cool. Okay. Voice training data. How are we doing? We're about ten minutes, and I'm still talking about voice technology. Um, so whisper was brilliant, and I was so excited that I was. My first instinct was to, like, get like, oh, my gosh, I have to get, like, a really good microphone for this. So, um, I didn't go on a spending spree because I said, I'm gonna have to just wait a month and see if I still use this. And it just kind of became it's become really part of my daily routine. Like, if I'm writing an email, I'll record a voice note. And then I've developed and it's nice to see that everyone is like developing the same
things in parallel. Like, that's kind of a weird thing to say, but when I look, I kind of came when I started working on this, these prototypes on GitHub, which is where I just kind of share very freely and loosely, uh, ideas and, you know, first iterations on, on concepts, um, and for want of a better word, I called it like, uh, lm post-processing or cleanup or basically a system prompt that after you get back the raw text from whisper, you run it through a model and say, okay, this is crappy text, like add sentence structure and, you know, fix it up. And, um, now when I'm exploring the different tools that are out there that people have built, I see, uh, quite a number of projects have basically done the same thing, um, less that be misconstrued. I'm not saying for a millisecond that I inspired them. I'm sure this has been a thing that's been integrated into tools for a while, but it's it's the kind of thing that when you start using these tools every day, the need for it is almost instantly apparent, uh, because text that doesn't have any punctuation or paragraph spacing takes a long time to, you know, it takes so long to get it into a presentable email that again, it's it's it moves speech tech into that before that inflection point where you're like, no, it's just not worth it. It's like it'll just be quicker to type this. So it's a big it's a little touch. That actually is a big deal. Uh, so I was on whisper and I've been using whisper and I kind of early on found a couple of tools. I couldn't find what I was looking for on Linux, which is, um, basically just something that'll run in the background. You'll give it an API key and it will just transcribe. Um. with, like, a little key to start and stop the dictation. Uh, and the issues were I discovered that, like most people involved in creating these projects were very much focused on local models running whisper locally, because you can. And I tried that a bunch of times and just never got results that were as good as the cloud. And when I began looking at the cost of the speech to text APIs and what I was spending, I just thought there's it's actually, in my opinion, just one of the better deals in API spending and in cloud. Like it's just not that expensive for very, very good models that are much more, you know, you're going to be able to run the full model, the latest model versus whatever you can run on your average GPU. Unless you want to buy a crazy GPU. It doesn't really make sense to me. Now, privacy is another concern. Um, that I know is kind of like a very much a separate thing that people just don't want their voice, data, and their voice leaving their local environment, maybe for regulatory reasons as well. Um, but I'm not in that. Um, I'm neither really care about people listening to my, uh, grocery list consisting of, uh, reminding myself that I need to buy more beer, Cheetos and hummus, which is kind of the three, three staples of my diet. Um, during periods of poor nutrition. Uh, but the kind of stuff that I transcribe, it's just not it's not a, it's not a privacy thing and that sort of sensitive about and, uh, I don't do anything so, you know, sensitive or secure, that requires air gapping. So, um, I looked at the pricing and especially the kind of older models, mini, um, some of them are very, very affordable. And I did a back of the I did a calculation once with ChatGPT and I was like, okay, this is a, this is the API price for I can't remember whatever the model was. Uh, let's say I just go at it like nonstop, which it rarely happens. Probably. I would say on average, I might dictate 30 to 60 minutes per day if I was probably summing up the emails, documents, outlines, um, which is a lot, but it's it's still a fairly modest amount. And I was like, well, some days I do go on like 1 or 2 days where I've been. Usually when I'm like kind of out of the house and just have something like, I have nothing else to do. Like if I'm at a hospital with a newborn, uh, and you're waiting for like eight hours and hours for an appointment, and I would probably have listened to podcasts before becoming a speech fanatic. And I'm like, oh, wait, let me just get down. Let me just get these ideas out of my head. And that's when I'll go on my speech binges. But those are like once every few months, like not frequently. But I said, okay, let's just say if I'm gonna price out. Cloud asked if I was like, dedicated every second of every waking hour to transcribing for some odd reason. Um. I mean, it'd have to, like, eat and use the toilet and, like, you know, there's only so many hours I'm awake for. So, like, let's just say a maximum of, like, 40 hours, 45 minutes in the hour. Then I said, all right, let's just say 50. Who knows? You're dictating on the toilet. We do it. Uh, so it could be you could just do 60. But whatever I did, and every day, like, you're going flat out seven days a week dictating non-stop. I was like, what's my monthly API bill going to be at this price? And it came out to like 70 or 80 bucks. And I was like, well, that would be an extraordinary amount of dictation. And I would hope that there was some compelling reason, more worth more than $70, that I embarked upon that project. Uh, so given that that's kind of the max point for me, I said, that's actually very, very affordable. Um, now you're gonna if you want to spec out the costs and you want to do the post-processing that I really do feel is valuable. Um, that's going to cost some more as well, unless you're using Gemini, which, uh, needless to say, is a random person sitting in Jerusalem. Uh, I have no affiliation, nor with Google, nor anthropic, nor Gemini, nor any major tech vendor for that matter. Um, I like Gemini. Not so much as a everyday model. Um, it's kind of underwhelmed in that respect, I would say. But for multimodal, I think it's got a lot to offer. And I think that the transcribing functionality whereby it can, um, process audio with a system prompt and both give you transcription that's cleaned up, that reduces two steps to one. And that for me is a very, very big deal. And, uh, I feel like even Google has haven't really sort of thought through how useful the that modality is and what kind of use cases you can achieve with it. Because I found in the course of this year just an endless list of really kind of system prompt, system prompt stuff that I can say, okay, I've used it to capture context data for AI, which is literally I might speak for if I wanted to have a good bank of context data about, who knows, my childhood. Uh, more realistically, maybe my career goals, uh, something that would just be, like, really boring to type out. So I'll just, like, sit in my car and record it for ten minutes. And that ten minutes, you get a lot of information in, um, emails, which is short text. Um, just there is a whole bunch. And all these workflows kind of require a little bit of treatment afterwards and different treatment. My context pipeline is kind of like just extract the bare essentials. So you end up with me talking very loosely about sort of what I've done in my career, where I've worked, where I might like to work, and it goes it condenses that down to very robotic language that is easy to chunk, parse, and maybe put into a vector database. Daniel has worked in technology, Daniel is a has been working in, you know, stuff like that. That's not how you would speak. Um, but I figure it's probably easier to parse for, after all, robots. So we've almost got to 20 minutes. And this is actually a success because I wasted 20 minutes of my, uh, of the evening speaking into a microphone, and, uh, the levels were shot and, uh, it, uh, it was clipping and I said, I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. Uh, what am I hoping to achieve in this? Okay, my fine tune was a dud, as mentioned Deepgram SVT. I'm really, really hopeful that this prototype will work. And it's a built in public open source, so anyone is welcome to use it if I make anything good. Um, but that was really exciting for me last night when after hours of, um, trying my own prototype, seeing someone just made something that works like that. You know, you're not going to have to build a custom conda environment and image. I have AMD GPU, which makes things much more complicated. I didn't find it and I was about to give up and I said, all right, let me just give deep grams Linux thing a shot. And if this doesn't work, um, I'm just going to go back to trying to code something myself. And when I ran the script, I was using cloud code to do the installation process. It ran the script and oh my gosh, it works just like that. Uh, the tricky thing for all those who wants to know all the nitty gritty, nitty gritty details, um, was that I don't think it was actually struggling with transcription, but pasting Wayland makes life very hard, and I think there was something not running in the right time anyway. Deepgram I looked at how they actually handle that because it worked out of the box when other stuff didn't, and it was quite a clever little mechanism, and but more so than that, the accuracy was brilliant. Now, what am I doing here? This is going to be a 20 minute audio sample, and I'm I think I've done 1 or 2 of these before, but I did it with short, snappy voice notes. This is kind of long form. This actually might be a better approximation for what's useful to me than voice memos. Like I need to buy three liters of milk tomorrow, and pita bread, which is probably how like half my voice voice notes sound like if anyone were to, I don't know, like find my phone, they'd be like, this is the most boring person in the world. Although actually there are some like kind of, uh, journaling thoughts as well. But it's a lot of content like that. And the probably for the evaluation, the most useful thing is slightly obscure tech GitHub uh, hugging face not so
obscure that it's not going to have a chance of knowing it, but hopefully sufficiently well known that the model should get it. I tried to do a little bit of speaking really fast and speaking very slowly. I would say in general, I've spoken, delivered this at a faster pace than I usually would owing to strong coffee flowing through my bloodstream. And the thing that I'm not going to get in this benchmark is background noise, which in my first take that I had to get rid of, my wife came in with my son and for a good night kiss. And that actually would have been super helpful to get in because it was not diarised. Or if we had diarisation a female, I could say I want the male voice and that wasn't intended for transcription. Um, and we're not going to get background noise like people honking their horns, which is something I've done in my main data set where I am trying to go back to some of my voice notes, annotate them, and run a benchmark. But this is going to be just a pure quick test. And as someone I'm working on a voice note idea, that's my sort of end motivation. Besides thinking it's an absolutely outstanding technology that's coming to viability. And really, I know this sounds cheesy can actually have a very transformative effect. Um, it's, you know, voice technology has been life changing for, uh, folks living with, um, disabilities. And I think there's something really nice about the fact that it can also benefit, you know, folks who are able bodied and like, we can all in different ways, um, make this tech as useful as possible, regardless of the exact way that we're using it. Um, and I think there's something very powerful in that, and it can be very cool. Um, I see use potential. What excites me about voice tech? A lot of things, actually. Firstly, the fact that it's cheap and accurate, as I mentioned at the very start of this, um, and it's getting better and better with stuff like accent handling, um, I'm not sure my, my fine tune will actually ever come to fruition in the sense that I'll use it day to day, as I imagine I get like superb, flawless word error rates because I'm just kind of skeptical about local speech to texts, as I mentioned. And I think the pace of innovation and improvement in the models, the main reasons for fine tuning from what I've seen have been people who are something that really blows, blows my mind about ASR is the idea that it's inherently a lingual or multilingual phonetic based. So as folks who use speak very obscure languages that there may be there might be a paucity of training data or almost none at all, and therefore the accuracy is significantly reduced or folks in very critical environments. I know there are. This is used extensively in medical transcription and dispatcher work as, um, you know, the call centers who send out ambulances, etc., where accuracy is absolutely paramount. And in the case of doctors, radiologists, they might be using very specialized vocab all the time. So those are kind of the main two things. And I'm not sure that really just for trying to make it better on a few random tech words with my slightly. I mean, I have an accent, but like, not, you know, an accent that a few other million people have. Ish. I'm not sure that my little fine tune is going to actually like the bump in word error rate reduction. If I ever actually figure out how to do it and get it up to the cloud by the time I've done that. I suspect that the next generation of ASR will just be so good that it will kind of be. Ah, well, that would be cool if it worked out, but I'll just use this instead. So that's going to be it for today's episode of, uh, voice training data. Single long shot evaluation. Who am I going to compare? Whisper is always good as a benchmark, but I'm more interested in seeing Whisperer head to head with two things, really. One is whisper variance. So you've got these projects like faster Whisper, Still whisper. It's a bit confusing. There's a whole bunch of them and the emerging acers, which are also a thing. My intention for this is I'm not sure I'm going to have the time in any point in the foreseeable future to go back through this whole episode and create a proper source, truth or a fix. Everything might do it if I can get one transcription that sufficiently close to perfection. But what I would actually love to do on Hugging Face I think would be a great. Probably how I might visualize this is having the audio waveform play, and then have the transcript for each model below it, and maybe even a, um, like, you know, two scale and maybe even a local one as well, like local whisper versus open AI API, Etc. and, um, I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't, as well as the sort of headline finding of which had the best, uh, wer. But that would require the source of truth. Okay. That's it. Hope this was, I don't know, maybe useful for other folks interested in stuff you want to see. I always feel think I've just said something I didn't intend to say. I said for those, listen carefully. Including, hopefully, the models themselves. This has been myself, Daniel Rosehill, for more, um, jumbled repositories about my, uh, roving interest in AI, but particularly Agentic, MCP and voice tech. Uh, you can find me on GitHub. Hugging face. Where else? Daniel, which is my personal website, as well as this podcast whose name I sadly cannot remember. Until next time. Thanks for listening.
Hello and welcome to a audio dataset consisting of one single episode of a non-existent podcast. Or I may append this to a podcast that I set up recently regarding my, with my thoughts on speech tech and AI in particular, more AI and generative AI I would say. But in any event, the purpose of this voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation as they might say for different speech attacks models. And I'm doing this because I thought I had made a great breakthrough in my journey with speech tech and that was succeeding in the elusive task of fine tuning whisper. And I'm going to just talk, I'm trying to mix up, I'm going to try a few different styles of speaking. I might whisper something at some points as well. And I'll go back to speaking loud in different parts, I'm going to sound really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to put a speech attacks model through its paces, which is trying to make sense of, is this guy just rambling on incoherently in one long sentence or are these just actually a series of step standalone, step alone, standalone sentences and how is it going to handle step alone? That's not a word. What happens when you use speech attacks and you use a fake word and then you're like, wait, that's not actually, that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why was I trying to fine tune whisper and what is whisper? As I said, I'm going to try to record this at a couple of different levels of technicality for folks who are, you know, in the normal world and not totally stuck down the rabbit hole of AI, which I have to say is a really wonderful rabbit hole to be down. It's a really interesting area and speech and voice tech is the aspect of it that I find actually most, I'm not sure I would say the most interesting because there's just so much that is fascinating in AI, but the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work and I'm persevering hard with the task of trying to get a good solution working for Linux, which if anyone actually does listen to this, not just for the training data and for the actual content, this is sparked. I had, besides the fine tune not working, well, that was the failure. I used flawed code because one thinks these days that there is nothing short of solving, you know, the reason of life or something that flawed and agentic AI can't do, which is not really the case. It does seem that way sometimes, but it fails a lot as well. And this is one of those instances where last week I put together an hour of voice training data, basically speaking, just random things for three minutes. And it was actually kind of tedious because the texts were really weird. Some of them were, it was like it was AI generated. I tried before to read Sherlock Holmes for an hour and I just couldn't. I was so bored after 10 minutes that I was like, okay, no, I'm just going to have to find something else to read. So I used a, created with AI Studio, vibe coded a synthetic text generator, which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note, like I'm recording an email, give me a short story to read, give me prose to read. So it came up with all these different things and they added a little timer to it so I could see how close I was to one hour. And I spent like an hour, one afternoon or probably two hours by the time you, you do retakes and whatever, because you want to, it gave me a source of truth, which I'm not sure if that's the scientific way to approach this topic of gathering training data, but I thought made sense. Um, I have a lot of audio data from recording voice notes, which I've also kind of used, uh, been experimenting with using for a different purpose, slightly different annotating task types. It's more text classification experiments or, uh, well, it's more than that actually working on a voice app. So it's a prototype, I guess, is really more accurate. Um, but you can do that and you can work backwards. You're like, you listen back to a voice note and you painfully go through one of those transcribing, you know, where you start and stop and scrub around it and you fix the errors, but it's really, really boring to do that. So I thought it would be less tedious in the longterm if I just recorded the source of truth. So it gave me these three minute snippets. I recorded them and saved an MP3 and a TXT in the same folder and I created an error that data. Uh, so I was very hopeful, quietly, you know, a little bit hopeful that I would be able that I could actually fine tune Whisper. Um, I want to fine tune Whisper because when I got into voice tech, uh, last November, uh, my wife was in the U S and I was alone at home. And, uh, you know, when crazy people like me do really wild things, like use voice to tech, uh, technology, that was basically, um, when I started doing it, I didn't feel like a crazy person speaking to myself. And my expectations weren't that high. Uh, I used speech tech now and again, um, tried it out. I was like, it'd be really cool if you could just like speak into your computer and whatever I tried out that had Linux support was just, it was not good basically. Um, and this blew me away from the first go. I mean, it wasn't 100% accurate out of the box and it took work, but it was good enough that there was a solid foundation and it kind of passed that, uh, pivot point that it's actually worth doing this. You know, there's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time for a speech attacks to be a worthwhile addition to your productivity, but you do need to get above, let's say, I don't know, 85%. If it's 60% or 50%, you inevitably say, screw it. I'll just type it because you end up missing, um, errors in the transcript and it becomes actually worse. You end up in a worse position than you started with it. That's been my experience. So, um, I was like, oh, this is actually really, really good. Now, how did that happen? And the answer is ASR, Whisper being open sourced and the transformer architecture. If you want to go back to the, uh, to the underpinnings, which really blows my mind and it's on my list to read through that paper. All you need is attention as attentively as can be done with my limited brain, because it's super, super high level stuff. Um, super advanced stuff, I mean, uh, but that I think of all the things that are fascinating about the sudden rise and AI and the dramatic capabilities, I find it fascinating. A few people are like, hang on, you've got this thing that can speak to you like a chat bot, an LLM, and then you've got image generation. Okay. So firstly, those two things on the surface have nothing in common. Um, so like, how are they, how did that just happen all at the same time? And then when you extend that further, um, you're like Suno, right? You can sing a song and AI will like come up with an instrumental and then you've got Whisper and you're like, wait a second, how did all this stuff, like if it's all AI, what's like, there has to be some commonality. Otherwise these are four, these are totally different technologies on the surface of it. And, uh, the transformer architecture is as far as I know, the answer. And I can't even say, can't even pretend that I really understand what the transformer architecture means in depth, but I have scanned this. And as I said, I want to print it and really kind of think over it at some point. And I'll probably feel bad about myself, I think, because weren't those guys in their twenties? Like, that's crazy. I think I asked Chad TPT once who were the, who wrote that paper and how old were they when it was published in ARCSYV. And I was expecting like, I don't know, what do you, what do you imagine? I personally imagine kind of like, you know, you have these breakthroughs during COVID and things like that, where like these kind of really obscure scientists who are like in their fifties and they've just kind of been laboring in labs and, uh, wearily and writing and publishing and kind of obscure academic publications. And they finally like hit a big or win a Nobel prize. And then their household, household names. Uh, so that was kind of what I had in mind. That was the mental image I'd formed of the birth of ARCSYV. Like I wasn't expecting 20 somethings in San Francisco though. I thought that was both very, very funny, very cool. And actually kind of inspiring. It's nice to think that people who, you know, just, you might put them in the kind of milieu or bubble or world that you are in or credibly in through, you know, the series of connections that are coming up with such literally world-changing, um, innovations. Uh, so that was, I thought, anyway, that's, that, that was cool. Okay. Voice training data. How are we doing? We're about 10 minutes and I'm still talking about voice technology. Um, so Whisper was brilliant and I was so excited that I was, my first instinct was to like guess it's like, Oh my gosh, I have to get like a really good microphone for this. So, um, I didn't go on a spending spree because I said, I'm going to have to just wait a month and see if I still use this. And it just kind of became, it's become really part of my daily routine. Like if I'm writing an email, I'll record a voice note and then I've developed. And it's nice to see that everyone is like developing the same things in parallel. Like that's kind of a weird thing to say, but when I look, I, I kind of came when I started working on this, uh, these prototypes on GitHub, which is where I just kind of share very freely and loosely, uh, ideas and, you know, first iterations on, on concepts. Um, and for want of a better word, I called it like, uh, LLM post-processing or cleanup, or basically a system prompt that after you get back the raw text from Whisper, you run it through a model and say, okay, this is crappy, uh, text, like add sentence structure and, you know, fix it up. And, um, now when I'm exploring the different tools that are out there that people have built, I see, uh, quite a number of projects have basically, you know, done the same thing. Um, lest that be misconstrued. I'm not saying for a millisecond that I inspired them. I'm sure this has been a, a thing that's been, uh, integrated into tools for a while, but it's, it's the kind of thing that when you start using these tools every day, the need for it is almost instantly apparent, uh, because text that doesn't have any punctuation or paragraph spacing takes a long time to, you know, it takes so long to get it into a presentable email that again, it's, it's, it, it moves speech tech into that before that inflection point where you're like, nah, it's just not worth it. It's like, it'll just be quicker to type this. So it's a big, it's a little touch that actually is a big deal. Uh, so I was on Whisper and I've been using Whisper and I kind of early on found a couple of tools. I couldn't find what I was looking for on Linux, which is, um, basically just something that'll run in the background. You'll give it an API key and it will just like transcribe, um, with like a little key to start and stop the dictation. Uh, and the issues were, I discovered that like most people involved in creating these projects were very much focused on local models, uh, running, running Whisper locally because you can, and I tried that a bunch of times and just never got results that were as good as the cloud. And when I began looking at the cost of the speech to text APIs and what I was spending, I just thought there's, it's actually, in my opinion, just one of the better deals in API spending and in cloud. Like it's just not that expensive for very, very good models that are much more, you know, you're going to be able to run the full model, the latest model versus whatever you can run on your average GPU, unless you want to buy a crazy GPU. It doesn't really make sense to me now. Privacy is another concern, um, that I know is kind of like a very much a separate thing that people just don't want their voice data and their voice leaving their local environment, maybe for regulatory reasons as well. Um, but I'm not in that, um, I'm neither really care about people listening to my, uh, grocery list, uh, consisting of, uh, reminding myself that I need to buy more beer, Cheetos and hummus, which is kind of the three, three staples of my diet, um, during, uh, periods of poor nutrition. Uh, but the kind of stuff that I transcribe, it's just not, it's not a, it's not a privacy thing I'm that sort of sensitive about. And, uh, I don't do anything so, you know, sensitive or secure that requires air gapping. So, um, I looked at the pricing and especially the kind of older models, mini, um, some of them are very, very affordable. And I did a back of the, I did a calculation once with chat GBT and I was like, okay, this is the, this is the API price for, I can't remember whatever the model was. Uh, let's say I just go at it like nonstop, which it rarely happens. Probably I would say on average, I might dictate 30 to 60 minutes per day. If I was probably summing up the emails, uh, documents, outlines, um, which is a lot, but it's still a fairly modest amount. And I was like, well, some days I do go on like one or two days where I've been usually when I'm like kind of out of the house and just have something like I've nothing else to do. Like if I'm at a hospital, we have a newborn, uh, and you're waiting for like eight hours and hours for an appointment. And I would probably have listened to podcasts before becoming a speech fanatic. And I'm like, oh wait, let me just get down. Let me just get these ideas out of my head. And that's when I'll go on my speech binges. But those are like once every few months, like not frequently. But I said, okay, let's just say if I'm going to price out cloudSTT, if I was like dedicated every second of every waking hour to transcribing for some odd reason, um, I mean, it'd have to like eat and use the toilet. Like, you know, there's only so many hours I'm awake for. So like, let's just say a maximum of like 40 hours, 45 minutes in the hours. And I said, all right, let's just say 50. Who knows? You're dictating on the toilet. We do it. So you could just do 60, but whatever I did and every day, like you're going flat out seven days a week, dictating nonstop. I was like, what's my monthly API bill going to be at this price? And it came out to like 70 or 80 bucks. And I was like, well, that would be an extraordinary amount of dictation. And I would hope that there was some compelling reason worth more than $70 that I embarked upon that project. So given that that's kind of the max point for me, I said, that's actually very, very affordable. Now you're going to, if you want to spec out the costs and you want to do the post-processing that I really do feel is valuable, that's going to cost them more as well. Unless you're using Gemini, which needless to say is a random person sitting in Jerusalem. I have no affiliation, nor with Google, nor Anthropic, nor Gemini, nor any major tech vendor for that matter. I like Gemini, not so much as a everyday model. It's kind of underwhelmed in that respect, I would say. But for multimodal, I think it's got a lot to offer. And I think that the transcribing functionality whereby it can process audio with a system prompt and both give you transcription, that's cleaned up. That reduces two steps to one. And that for me is a very, very big deal. And I feel like even Google hasn't really sort of thought through how useful that modality is and what kind of use cases you can achieve with it. Because I found in the course of this year, just an endless list of really kind of system prompt stuff that I can say, OK, I've used it to capture context data for AI, which is literally I might speak for if I wanted to have a good bank of context data about, who knows, my childhood. More realistically, maybe my career goals, something that would just be like really boring to type out. So I'll just like sit in my car and record it for 10 minutes and that 10 minutes, you get a lot of information in emails, which is short text. Just there is a whole bunch. And all these workflows kind of require a little bit of treatment afterwards and different treatment. My context pipeline is kind of like just extract the bare essentials. So you end up with me talking very loosely about sort of what I've done in my career, where I've worked, where I might like to work. And it goes, it condenses that down to very robotic language that is easy to chunk, parse and maybe put into a vector database. Daniel has worked in technology. Daniel is a has been working in, you know, stuff like that. That's not how you would speak. But I figure it's probably easier to parse for, after all, robots. So we've almost got to 20 minutes, and this is actually a success because I wasted 20 minutes of my of the evening speaking into a microphone and the levels were shot and it was clipping. And I said, I can't really do an evaluation. I have to be fair. I have to give the models a chance to do their thing. What am I hoping to achieve in this? OK, my fine tune was a dud, as mentioned, DeepGram SDT. I'm really, really hopeful that this prototype will work and it's a built in public open source. So anyone is welcome to use it if I make anything good. But that was really exciting for me last night when after hours of trying my own prototype, seeing someone just made something that works like that. You know, you're not going to have to build a custom Conda environment, an image. I have AMD GPU, which makes things much more complicated. I didn't find it. And I was about to give up and I said, all right, let me just give DeepGram's Linux thing a shot. And if it doesn't work, I'm just going to go back to trying to code something myself. And when I ran the script, I was using cloud code to do the installation process. It ran the script and oh, my gosh, it works just like that. The tricky thing. For all those wants to know all the nitty gritty details was that I don't think it was actually struggling with transcription, but pasting. Wayland makes life very hard. And I think there was something not running at the right time. Anyway, DeepGram, I looked at how they actually handle that because it worked out of the box when other stuff didn't. And it was quite a clever little mechanism. And but more so than that, the accuracy was brilliant. Now, what am I doing here? This is going to be a 20 minute audio sample. And I'm I think I've done one or two of these before, but I did it with short, snappy voice notes. This is kind of long form. This actually might be a better approximation for what's useful to me than voice memos. Like, I need to buy three liters of milk tomorrow and pita bread, which is probably how like half my voice note voice note sound like if anyone were to, I don't know, like find my phone, they'd be like, this is the most boring person in the world. Although, actually, there are some like kind of journaling thoughts as well, but it's a lot of content like that. And the probably for the evaluation, the most useful thing is slightly obscure tech. GitHub, Nuclino, Hugging Face, not so obscure that it's not going to have a chance of knowing it, but hopefully sufficiently well known that the model should get it. I tried to do a little bit of speaking really fast and speaking very slowly. I would say in general, I've spoken, delivered this at a faster pace than I usually would owing to strong coffee flowing through my bloodstream. And the thing that I'm not going to get in this benchmark is background noise, which in my first take that I had to get rid of, my wife came in with my son and for a good night kiss. And that actually would have been super helpful to get in because it was non-diarized or if we had diarization, a female, I could say, I want the male voice and that wasn't intended for transcription. And we're not going to get background noise like people honking their horns, which is something I've done in my main data set where I am trying to go back to some of my voice notes, annotate them and run a benchmark. But this is going to be just a pure quick test. And as someone working on a voice note idea, that's my sort of end motivation besides thinking it's an absolutely outstanding technology that's coming to viability. And really, I know this sounds cheesy, can actually have a very transformative effect. It's, you know, voice technology has been life changing for folks living with disabilities. And I think there's something really nice about the fact that it can also benefit, you know, folks who are able-bodied and like we can all in different ways make this tech as useful as possible, regardless of the exact way that we're using it. And I think there's something very powerful in that and it can be very cool. I see huge potential. What excites me about voice tech? A lot of things, actually. Firstly, the fact that it's cheap and accurate, as I mentioned at the very start of this. And it's getting better and better with stuff like accent handling. I'm not sure my fine tune will actually ever come to fruition in the sense that I'll use it day to day, as I imagine. I get like superb, flawless words, error rates, because I'm just kind of skeptical about local speech to text, as I mentioned. And I think the pace of innovation and improvement in the models, the main reasons for fine tuning from what I've seen have been people who are something that really blows my mind about ASR is the idea that it's inherently alingual or multilingual, phonetic based. So as folks who speak very obscure languages, that there might be a paucity of training data or almost none at all, and therefore the accuracy is significantly reduced. Or folks in very critical environments. I know this is used extensively in medical transcription and dispatcher work as, you know, the call centers who send out ambulances, et cetera, where accuracy is absolutely paramount. And in the case of doctors, radiologists, they might be using very specialized vocab all the time. So those are kind of the main two things. And I'm not sure that really just for trying to make it better on a few random tech words with my slightly, I mean, I have an accent, but like not, you know, an accent that a few other million people have it. I'm not sure that my little fine tune is going to actually, like the bump in word error reduction, if I ever actually figure out how to do it and get it up to the cloud, by the time I've done that, I suspect that the next generation of ASR will just be so good that it will kind of be, well, that would have been cool if it worked out, but I'll just use this instead. So that's going to be it for today's episode of voice training data. Single long shot evaluation. Who am I going to compare? Whisper is always good as a benchmark, but I'm more interested in seeing Whisper head to head with two things really. One is Whisper variants. So you've got these projects like Faster Whisper, Distilled Whisper. It's a bit confusing. There's a whole bunch of them. And the emerging ASRs, which are also a thing. My intention for this is I'm not sure I'm going to have the time in any point in the foreseeable future to go back through this whole episode and create a proper source truth where I fix everything. I might do it if I can get one transcription that's sufficiently close to perfection. But what I would actually love to do on Hugging Face, I think would be a great, probably how I might visualize this, is having the audio waveform play and then have the transcript for each model below it. And maybe even a, like, you know, two scale and maybe even a local one as well. Like local Whisper versus OpenAI API, etc. And I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't, as well as the sort of headline finding of which had the best WER. But that would require the source of truth. OK, that's it. I hope this was, I don't know, maybe useful for other folks interested in STT. You want to see that, I always feel, think I've just said something I didn't intend to. STT, I said for those. Listen carefully, including hopefully the models themselves. This has been myself, Daniel Rosehill. For more jumbled repositories about my roving interest in AI, but particularly agentic, MCP and voice tech, you can find me on GitHub, Hugging Face. Where else? Danielrosehill.com, which is my personal website, as well as this podcast, whose name I sadly cannot remember. Until next time, thanks for listening.
Hello and welcome to a audio data set consisting of one single episode of a non-existent podcast. Or, I may append this to a podcast that I set up recently regarding my with my thoughts on speech tech and AI in particular. More AI in generative AI I would say. But in any event, the purpose of this voice recording is actually to create a lengthy voice sample for a quick evaluation, a back of the envelope evaluation as they might say for different speech attacks models. And I'm doing this because I I thought I'd made a great breakthrough in my journey with speech tech. And that was succeeding in the elusive task of fine-tuning whisper. Whisper is, and I'm going to just talk, I'm trying to mix up, I'm going to try a few different styles of speaking. I might whisper something at some points as well. And I'll go back to speaking loud in different parts. I'm going to send really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to push a speech attacks model through its paces, which is trying to make sense of is this guy just rambling on and coherently in one long sentence or are these just actually series of step standalone standalone sentences? And how is it going to handle step alone? That's not a word. What happens when you use speech attacks and you use a fake word and then you're like, wait, that's not actually, that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why did why was it trying to find China whisper? And what is whisper? As I said, I'm going to try to record this at a couple of different levels of technicality for folks who are in the normal world and not totally stuck down the rabbit hole of AI. What you have to say is a really wonderful rabbit hole to be to be done. It's a really interesting area and speech and voice tech is the aspect of it that I find actually most I'm not sure I would say the most interesting because there's just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I am persevering hard with the task of training, I guess, a good solution working for Linux. Would you have anyone actually does listen to this not just for the training data and for the actual content? This is this is sparked. I had, besides the fine tune not working, well that was the failure. I used plot code because one thing these days that there is nothing short of solving, you know, the reason of life or something that's flawed and agentically I can't do, which is not really the case. It does seem that way sometimes but it fails a lot as well. And this is one of those instances where last week I put together an hour of voice training data, basically speaking just random things for three minutes and
it was actually kind of tedious because the text were really weird. Some of them were it was like, it was AI generated. I tried before to reach Sherlock Holmes for an hour and I just couldn't, I was so bored after 10 minutes that I was like, okay, knowing just gonna have to find something else to read. So I used a created with AI Studio, a vibe code is a synthetic text generator, which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like, okay, give me a voice note. Like I'm recording an email, give me a short story to read, give me pros to read. So I came up with all these different things and they added a little timer to it so I could see how close I was to one hour and I spent like an hour one afternoon or probably two hours by the time you do retakes and whatever because you want to, it gave me a source of truth which I'm not sure if that's the scientific way to approach this. Topic of gathering training data but I thought made sense. I have a lot of audio data from recording voice notes which I've also kind of used being experimenting with using for a different purpose. It's slightly different annotating task types. It's more text classification experiment or well it's more than that actually working on a voice app so it's a prototype I guess is really more accurate.
But you can do that and you can work backwards. You listen back to a voice note and you painfully go through one of those transcribing where you start and stop and scrub around it and you fix the errors but it's really really pouring to do that. So I thought it would be last tedious in the long term if I just recorded this source of truth so it gave me these three minute snippets. I recorded them it saved in MP3 and the TXT in the same folder and I created an error that data. So I was very hopeful quite a little bit hopeful that I would be able that I could actually fine tune whisper. I want to fine tune whisper because when I got into voice tech last November my wife was in the US and I was alone at home and when crazy people like me do really wild things like use voice to tech technology that was basically when I started doing it I didn't feel like a crazy person speaking to myself and my expectations weren't that high. I used speech tech now and again try it out. It's like it'd be really cool if you could just like speak into your computer and whatever I tried out that had Linux support was just it was not good basically and this blew me away from the first go. I mean it wasn't 100% accurate out of the box and it took work but it was good enough that there was a solid foundation and it kind of passed that pivot point that it's actually worth doing this. There's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time for it's speech attacks to be a worthwhile addition to your productivity but you do need to get above let's say I don't know 85% if it's 60% or 50% you inevitably say screw it I'll just type it because you end up missing errors in the transcript and it becomes actually worse you end up in a worse position than you started with it that's been my experience. So I was like oh this is actually really really good now how did that happen? The answer is ASR with per being open-sourced and the transformer architecture if you want to go back to the to the underpinnings which really blows my mind and it's on my list to read through that paper all you need is attention as attentively as can be done with my limited brain because it's super high-level stuff super advanced stuff I mean but that I think of all the things that are fascinating about the sudden rise and AI and the dramatic capabilities I find it fascinating a few people are like hang on you've got this thing that can speak to you like a chatbot and LLM then you've got image generation okay so firstly those two things on the surface have nothing in common so like how are they how did that just happen all at the same time and then when you extend that further you're like sooner right you can sing a song an AI will like come up with an instrumental and then you've got whisper and you're like wait a second how did all this stuff like if it's all AI what's like there has to be some commonality otherwise these are four these totally different technologies on the surface of it and the transformer architecture is as far as I know the answer and I can't even say I can't even pretend that I really understand what the transformer architecture means in depth but I have scandis and as I said I want to print it and really kind of think over it's at some point and I'll probably feel bad about myself I think because weren't those guys in their in their 20s like that's crazy I think I asked chat Gbt once who were the who wrote that paper and how old were they when it was published in ARC and I was expecting like I don't know what do you what do you imagine I personally imagine kind of like you know you have these breakthroughs during COVID and things like that were like these kind of really obscure scientists who are like in their 50s and they've just kind of been laboring labs and we're really in writing and publishing and kind of obscure academic publications and they finally like hit a big or win a Nobel Prize and then their household household names so I that was kind of what I had in mind that was the mental image I'd formed of the birth of ARC like I wasn't expecting 20 somethings in San Francisco though I thought that was both very very funny very cool and actually kind of inspiring it's nice to think that people who you know just you might put them in the kind of milieu or bubble or world that you are in are credibly in through you know the series of connections that are coming up with such literally world changing innovations so that was I thought anyway that that that was cool okay voice training data how were we doing we're about 10 minutes and I'm still talking about voice technology so whisper was brilliant and I was so excited that I was my first instinct was to like guess it's like oh my gosh I have to get like a really good microphone for this so I didn't go on a spending spree because I said I'm gonna have to just wait a month and see if I still use this and it just kind of became it's become really part of my daily routine like if I'm writing an email I'll record a voice note and then I've developed and it's nice to see that everyone is like developing the same things in parallel like that's my kind of a weird thing to say but when I look I kind of came when I started working on this these prototypes on GitHub which is where I just kind of share very freely and loosely ideas and you know first iterations on concepts and for one of a better word I called it like LLM post processing or cleanup or basically a system prompt that after you get back the raw text from whisper you run it through model and say okay this is crappy text like add sentence structure and you know fix it up and now when I'm exploring the different tools that are out there the people have built I see quite a number of projects have basically you know done the same thing less that be misconstrued I'm not saying for a millisecond that I inspired them I'm sure this has been a thing that's been integrated into tools for a while but it's it's the kind of thing that when you start using these tools every day the need for it is almost instantly apparent because text that doesn't have any punctuation or progress basing takes a long time to you know it takes so long to get it into a presentable email that again it's it moves speech tech into that before that inflection point where you're like nah she's not worth it it's like it'll just be quicker to type this so it's it's a big it's a little touch that actually is a big deal so I was on whisper and I've been using whisper and I kind of early on find a couple of tools I couldn't find what I was looking for on Linux which is basically just something that'll run in the background you'll give it an API key and it'll just like transcribe
with like a little key to start and start the dictation and the issues where I discovered that like most people involved in creating these projects were very much focused on local models and running whisper locally because you can and I tried that a bunch of times and just never got results that were as good as the cloud and when I began looking at the cost of the speech text API is what I was spending I just thought there is it's actually in my opinion just one of the better deals in API spending and in cloud like it's just not that expensive for very very good models that are much more you know you're going to be able to run the full model the latest model versus whatever you can run on your average GPU unless you want to buy a crazy GPU it doesn't really make sense to me and I privacy is another concern that I know is kind of like a very much a separate thing that people just don't want their voice data and their voice leaving their local environment maybe for regulatory reasons as well but I'm not in that I'm neither really care about people listening to my grocery list consisting of reminding myself that I need to buy more beer cheetos and hummus which is kind of the three three staples of my diet during periods of poor nutrition but the kind of stuff that I transcribe most it's just not it's not a it's not a privacy thing I'm that sort of sensitive about and I don't do anything so you know sensitive or secure that requires airgapping so I looked at the pricing and especially the kind of older models mini some of them are very very affordable and I did it back to the I did a calculation once with Chachi BT and I was like okay this is the this is the API price for I can't remember whatever the model was let's say I just go at it like nonstop which rarely happens probably I would say an average I might dictate 30 to 60 minutes per day if I was probably summing up the emails documents outlines which is a lot but it's it's still a fairly modest amount and I was like well some days I do go on like one or two days where I've been usually when I'm like kind of out of the house and just have something like I've nothing else to do like if I'm at a hospital we have a newborn and you're waiting for like eight hours and hours for an appointment and I would probably have listened to podcasts before becoming a speech fanatic and I'm like oh wait let me just get down let me just get these ideas out of my head and that's when I'll go on my speech spinges but those are like ones every few months like not frequently but I said okay let's just say if I'm gonna price out cloud sgt if I was like dedicated every second of every waking hour to transcribing for some odd reason I mean it have to like ease and use the toilet like you know there's only so many hours I'm awake for so like let's just say a maximum of like 40 hour 45 minutes in the hours and I said all right let's just say 50 who knows you're dictating on the toilet we do it so you could just do 60 but whatever I did and every day like you're going flat out seven days a week dictating nonstop as like what's my monthly API bill gonna be at this price and it came out to like seven to your 80 bucks and I was like well that would be an extraordinary amount of dictation and I would hope that there was some compelling reason more worth more than 70 dollars that I embarked upon that so given that that's kind of the max point for me I said that's actually very very affordable now you're gonna if you want to spec out the costs and you want to do the post processing that I really do feel as valuable that's gonna cost more as well on a less you're using Gemini which needless to say is a random person sitting in Jerusalem I have no affiliation nor with Google nor anthropic nor Gemini nor any major tech vendor for that matter um I like Gemini not so much as a everyday model um it's kind of underwhelmed in that respect I would say but for multimodal I think it's got a lot to offer and I think that the transcribing functionality whereby it can um process audio with the system prompt and both give you a transgression that's cleaned up that reduces two steps to one and that for me is a very very big deal and uh I feel like even Google hasn't really sort of thought through how useful the that modality is more kind of use cases uh you can achieve with it because I found in the course of this year just an endless list of really kind of system prompt system prompt stuff that I can say okay I've used it for a capture context data for AI which is literally I might speak for if I wanted to have a good bank of context data about who knows my childhood uh more realistically maybe my career goals something that would just be like really boring to type out so I'll just like sit in my car and record it for 10 minutes and that's 10 minutes you get a lot of information in um emails which is short text uh just there is a whole bunch and all these workflows kind of require a little bit of treatment afterwards and different treatment my context pipeline is kind of like just extract the bare essential so you end up with me talking very loosely about sort of what I've done in my career where I've worked where my light to work and it goes it condenses that down to very robotic language that is easy to chunk parts and maybe put into a vector database Daniel has worked in technology Daniel is a has been working in martino stuff like that that's not how you would speak um but I figure it's probably easier to parse for after all robots so we've almost got to 20 minutes and this is actually a success because I waste 20 minutes of my uh of the evening speaking into microphone and the levels were shot and it uh it was clipping and I said I can't read you an evaluation I have to be fair I have to give the models a chance to do their thing uh what am I hoping to achieve in this okay my fine shun was a dud as mentioned deep gram sdt I'm really really hopeful that this prototype will work and it's a built in public open source so anyone is welcome to use it if I make anything good but that was really exciting for me last night when after hours of um try my own prototype seeing someone just made something that works like that you know you're not going to have to build a custom condo environment and image I have AMD GPU which makes things much more complicated I didn't find it and I was about to give up and I said all right let me just give deep grams Linux thing a shot and if this doesn't work um I'm just going to go back to trying to vibe code something myself and when I ran the script I was using cloud code to do the installation process it ran the script and oh my gosh it works just like that uh the tricky thing for all those ones and all the nitty-ditty-ditty-gritty details um was that I don't think it was actually struggling with transcription but pasting wailant makes life very hard and I think there was something not running at the right time anyway deep gram I looked at how they actually handled that because it worked out in the box when other stuff didn't and it was quite a clever little mechanism and but more so than that the accuracy was brilliant now what am I doing here this is going to be a 20 minute audio uh sample and I'm I think I've done one or two of these before but I did it with sure snappy voice notes this is kind of long form this actually might be a better approximation for what's useful to me than voice memos like I need to buy three beaters of moat tomorrow and peter bread which is probably how like half my voice note voice notes sound like if anyone were to I don't know like find my phone they'd be like this is the most boring person in the world although actually there are some like kind of uh journaling thoughts as well but it's a lot of content like that and the probably for the evaluation the most useful thing is slightly obscure tech github new cleano hugging face not so obscure that it's not going to have a chance of knowing it but hopefully sufficiently well known that the models should get it uh I tried to do a little bit of speaking really fast and speaking very slowly I would say in general I've spoken deliver this at a faster pace than I usually would go into strong coffee flowing through my bloodstream and the thing that I'm not going to get into spanish mark is background noise which is my first take that I had to get rid of my wife come in with my son and for a good night kiss and that actually would have been super helpful to get in because it was non-diarray sorry if we had diarrayization a female I could say I want the male voice and that wasn't intended for transcription um and we're not going to get background noise like people hunking their horns which is something I've done in my main data set where I am trying to go back to some of my voice notes annotate them and run a benchmark but this is going to be just a pure quick test and as someone I'm working on a voice note idea that's my sort of end motivation besides thinking it's an absolute outstanding technology that's coming to viability and really I know the same as cheesy can actually have a very transformative effect it's you know voice technology has been life changing for folks living with disabilities and I think there's something really nice about the fact that it can also benefit you know folks who are able bodies and like we can all in different ways um make this tech as useful as possible regardless of the exact way that we're using it um and I think there's something very powerful in that and it can be very cool um I see huge potential what excites me about voice tech a lot of things actually firstly the fact that it's cheap and accurate as I mentioned at the very start of this um and it's getting better and better with stuff like accent handling um I'm not sure my fight my fine tune will actually ever come to fruition in the sense that I'll use it day to day as I imagine and get likes you per flawless words error rates because I'm just kind of skeptical about local speech attacks as I mentioned and I think the pace of innovation and improvement in the models the main reasons for fine tuning from what I've seen have been people who are something that really blows blows my mind about ASR is the idea that it's inherently ailing you or multilingual phonetic based so as folks who use speak very obscure languages that there may be there there might be a positive training data or almost none at all and therefore the accuracy is significantly reduced or folks in very critical environments I know there are you this is used extensively in medical transcription and dispatch your work as um you know the call sentries who send out ambulances etc where accuracy is absolutely paramount and in the case of doctors radiologists they might be using very specialized vocab all the time so those are kind of the main two things and I'm not sure that really just for trying to make it better on a few random tech words with my slightly I mean I have an accent but like not you know an accent that a few other million people have ish I'm not sure that my little fine tune is going to actually like the bump in word error reduction if I ever actually figure out how to do it and get it up to the cloud by the time we've done that I suspect that the next generation of ASR will just be so good that it will kind of be well that would be cool for a dive but I'll just use this instead so that's going to be is for today's episodes of voice training data single long shot evaluation who am I going to compare whisper is always good as a benchmark but I'm more interested in seeing whisper head-to-head with two things really one is whisper variance so you've got these projects like faster whisper distilled whisper it's a bit confusing there's a whole bunch of them and the emerging ASRs which are also a thing my intention for this is I'm not sure I'm going to have the time in any point of the foreseeable future to go back to this whole episode and create a proper source truth or I fix everything might do it if I can get one transcriptions that's sufficiently close to perfection but what I would actually love to do on hogging face I think would be a great probably how I might visualize this is having the audio waveform play and then have the transcript for each model below it and maybe even a like you know two scale and maybe even a local one as well like local whisper versus open AI API etc and I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model to struggle with others didn't as well as the sort of headline finding of which had the best WER but that would require the source of truth okay that's it hope this was I don't know maybe useful for other folks interested in STT you want to see that I always feel think I've just said as something I didn't intend to STT I said for those isn't carefully including hopefully the models themselves this has been myself Daniel Rosal for more um jumbled repositories about my uh roving interests in AI but particularly agentic mcp and voice tech you can find me on github hogging face where else daniel rosal dot com which is my personal website as well as this podcast whose name I sadly cannot remember until next time thanks for listening
Hello and welcome to a audio data set consisting of one single episode of a non-existent podcast. Or, I may append this to a podcast that I set up recently regarding my with my thoughts on speech tech and AI in particular. More AI in generative AI, I would say. But in any event, the purpose of this voice recording is actually to create a lengthy voice sample for a quick evaluation of back of the envelope evaluation as they might say for different speech attacks models. And I'm doing this because I thought I had made a great breakthrough in my journey with speech tech and that was succeeding in the elusive task of fine-tuning whisper. Whisper is, and I'm going to just talk, I'm trying to mix up, I'm going to try a few different styles of speaking, I might whisper something at some points as well. And I'll go back to speaking loud in a different part. So I'm going to send really like a crazy person because I'm also going to try to speak at different pitches and cadences in order to really try to put a speech attacks model through its pieces, which is trying to make sense of, is this guy just ramling on and coherently in one long sentence or are these just actually a series of
step of standalone, standalone, standalone sentences. And how is it going to handle step alone? That's not a word. What happens when you use speech attacks and you use a fake word. And then you're like, wait, that's not actually that word doesn't exist. How does AI handle that? And these and more are all the questions that I'm seeking to answer in this training data. Now, why was it trying to find you to whisper? And what is whisper, as I said, I'm going to try to record this at a couple of different levels of technicality for folks who are in the normal world and not totally stocked down the rabbit hole of AI. What you have to say is a really wonderful rabbit hole to be, to be done. It's a really interesting area and speech and voice attack is the aspect of it that I find actually most, I'm not sure I would say the most interesting because there's just so much that is fascinating in AI. But the most that I find the most personally transformative in terms of the impact that it's had on my daily work life and productivity and how I sort of work. And I'm persevering hard with the task of training, yes, a good solution working for Linux. Would you have anyone actually, does listen to this not just for the training data and for the actual content? This is this is sparked. I had, besides the fine tune not working. Well, that was the failure. I used Claude Codes because one thing's these days that there is nothing sort of solving you know, the reason of life or something at that's flawed and agentic AI can't do, which is not really the case. It does seem that way sometimes, but it fails a lot as well. And this is one of those instances where last week I put together an hour of voice training data. Basically speaking just random things for three minutes and it was actually kind of tedious because the texts were really weird. Some of them were it was like AI generated. I tried before to reach Sherlock Holmes for an hour and I just couldn't, I was so bored after 10 minutes that I was okay knowing just gonna have to find something out to read. So I used a created with AI Studio vibe code as a synthetic text generator, which actually I thought was probably a better way of doing it because it would give me more short samples with more varied content. So I was like okay, give me a voice note like I'm recording an email, give me a short story to read, give me pros to read. So I came up with all these different things and they added a little timer to it so I could see how to let us say well as to one hour. And I spent like an hour, one afternoon or probably two hours by the time you do retakes on whatever because you want to, it gave me a source of truth which I'm not sure if that's the scientific way to approach this topic of gathering training data, but I thought made sense. I have a lot of audio data from recording voice notes which I've also kind of used being experimenting with using for a different purpose. It's slightly different annotating task types. It's more text classification experiment or well it's more than that actually I'm working on a voice app so it's a prototype I guess is really more accurate. But you can do that and you can work backwards. You listen back to a voice note and you painfully go through one of those transcribing where you start and stop and scrub around in a new fixie areas but it's really really pouring to do that. So I thought it would be less tedious in the long term if I just recorded this source of truth. So it gave me these three minutes snippets. I recorded them it saved an MP3 and a TXT in the same folder and I created an error that data. So I was very hopeful that I could actually find you in whisper. I want to find you in whisper because when I got into voice tech last November my wife was in the US and I was alone at home and when crazy people like me do really wild things like use voice tech technology that was basically when I started doing it I didn't feel like a crazy person speaking to myself and my expectations weren't that high. I used speech tech now and again tried to write as like it would be really cool if you just like speak into your computer and whatever I tried I used that had Linux support was just it was not good basically and this blew me away from the first go. I mean it wasn't 100% accurate either the box and it took work but it was good enough that there was a solid foundation and it kind of passed that pivot point that it's actually worth doing this. There's a point where it's so like the transcript is you don't have to get 100% accuracy for it to be worth your time for it's speech tech to be worth while it isn't your productivity but you do need to get above let's say 85%. If it's 60% or 50% you inevitably say screw it I'll just type it because you end up missing errors in the transcript and it becomes actually worse you end up in a worse position than you started with that's been my experience. So I was like oh this is actually really really good now how did that happen? The answer is ASR whisper being open sourced and the transformer architecture if you want to go back to the to the underpinnings which really blows my mind and it's on my list to reto that paper all you need is attention as attentively as can be done with my limited brain because it's super super high level stuff super advanced stuff I mean but that I think of all the things that are fascinating about the sudden rise and AI and the dramatic capabilities I find fascinating a few people are like hang on you've got this thing that can speak to you like a chatbot and LLM and then you've got image generation okay so first of all those two things on the surface have nothing in common so like how did that just happen all at the same time and then when you extend that further you're like Suno right you can sing a song an AI will like come up with an instrumental and then you've got whisper and you're like wait a second how did all this stuff like if it's all AI what's like there has to be some commonality otherwise these are for these are totally different technologies on the surface of it and the transformer architecture is as far as I know the answer and I can't even say you can't even pretend that I really understand what the transformer architecture means in depth but I have scandice and as I said once a printed I'm really kind of think over it's at some point and I'll probably feel bad about myself I think because when those guys in the in their 20s like that's crazy I think I asked you to be one who were the who wrote that paper and how old were they when it was published in arcs of I was expecting like I don't know what do you imagine I personally imagine kind of like you know you have these breakthroughs during Covid and things like that were like these kind of really obscure scientists who are like in their 50s and they've just kind of been laboring labs and we're really in writing and publishing and kind of obscure academic publications and they finally like hit a bake or when a noble apprise and then their household names so that was kind of what I had that was the mental image I'd formed of the birth of arcs of like I wasn't expecting 20 somethings in San Francisco though I thought that was both very very funny very cool and actually kind of inspiring it's nice to think that people who you know just you might put them in the kind of milieu or bubble or world that you are in or incredibly in through you know the series of connections that are coming up with such literally world changing innovations so that was I thought anyway that that that was cool okay voice training data how are we doing we're at by 10 minutes and I'm still talking about voice technology so whisper was brilliant and I was so excited that I was my first instinct was to like guess like oh my gosh I have to get like a really good microphone for this so I didn't go on a spending spree because I said I'm gonna have to just wait a month and see if I still use this and it just kind of became it's become really part of my daily routine like if I'm writing an email I'll record a voice note and then I've developed and it's nice to see that everyone is like developing the same things in parallel like that's my kind of a weird thing to say but when I look I kind of came when I started working on this these prototypes on GitHub which is where I just kind of share very freely and loosely ideas and you know first iterations on concepts and for one of a better word I called it like LLM post processing or cleanup or basically a system prompt that after you get back the raw text from whisper you run it through model and say okay this is crappy text like add sentence structure and you know fix it up and now when I'm exploring the different tools that are out there the people of built I see quite a number of projects have basically you know done the same thing last that we missed construit I'm not saying for a millisecond that I inspired them I'm sure this has been a thing that's been integrated into tools for a while but it's it's the kind of thing that when you start using these tools every day the need for it is almost instantly apparent because text that doesn't have any punctuation or paragraph spacing takes a long time to you know it takes so long to get it into a presentable email that again it's it moves speech tech into that before that inflection point we're like that's just not worth it it's like it's just be quicker to type this so it's it's a big it's a little touch that actually is a big deal so I was on whisper and I've been using whisper and I kind of early on find a couple of tools I couldn't find what I was looking for on Linux which is basically just something that'll run in the background you'll give it an API key and it'll just like transcribe with like a little key to start and start the dictation and the issues where I discovered that like most people involved in creating these projects were very much focused on local models running whisper locally because you can and I tried that a bunch of times and just never got results that were as good as the cloud and when I began looking at the cost of the speech tech to API is what I was spending I just thought there is it's actually in my opinion just one of the better deals in API spending and in cloud like it's just not that expensive for very very good models that are much more you know you're going to be able to run the full model the latest model versus whatever you can run on your average GPU unless you want to buy crazy GPU it doesn't really make sense to me and I've been a lot of things that I know is kind of like a very much just everything the people just don't want their voice data and their voice leaving their local environment maybe for regular few reasons as well but I'm not in that I'm neither really care about people listening to my gross readest consisting of reminding myself that I need to buy more beer cheetos and hummus which is kind of the three three staples of my diet during periods of poor nutrition but the kind of stuff that I transcribe it's just not it's not a it's not a privacy thing that sort of sensitive about and I don't do anything so you know sensitive or secure that requires air capping so I looked at the pricing and especially the kind of older model mini some of them were very very affordable and I did it back the I did a calculation once with chatchewet and I was like okay this is the API price for I can't remember whatever the model was let's say I just go out at like nonstop which really happens probably I would say an average I might dictate 30 to 60 minutes per day if I was probably summing up the emails documents outlines which is a lot but it's still a fairly modest amount and I was like well some days I do go on like one or two days right being usually when I'm like kind of I'd do the house and just have something like I've nothing else to do like if I met a hospital we've a newborn and you're waiting for like eight hours and hours for an appointment and I would probably have listened to podcasts before becoming a speech fanatic and I'm like oh wait let me just get down let me just get these ideas out of my head and that's when I'll go on my speech spinches but those were like once every few months like not frequently but I said okay let's just say if I'm going to price out cloud STT if I was like dedicated every second of every waking hour to transcribing for some odd reason I mean it have to like ease and use the toilet like you know there's only so many hours I'm awake for so like let's just say a maximum of like 40 hour 45 minutes in the hour then I said all right let's just say 50 who knows you're dictating on the toilet we do it so you could just do 60 but whatever I did and every day like you're going flat out seven days a week dictating nonstop it's like what's my monthly API bill gonna be at this price and it came out to like 70 or 80 bucks and I was like well that would be an extraordinary amount of dictation and I would hope that there were some compelling reason more worth more than 70 dollars that I embarked upon their project so given the dots kind of the max point for me I said that's actually very very affordable now you're gonna if you want to specide the costs and you want to do the post processing that I really do feel as valuable that's gonna cost them more as well on the last you're using Gemini which needless to say is a random person sitting in Jerusalem I have no affiliation nor with Google nor anthropic nor Gemini nor any major tech vendor for that matter I like Gemini not so much as a everyday model it's kind of underwhelmed in that respect I would say but for multi-model I think it's got a lot to offer and I think that the transcribing functionality whereby it can process audio with the system prompt and both give you transcription that's cleaned up that reduces two steps to one and that for me is a very very big deal and I feel like even Google hasn't really sort of thought through how useful the downloadability is more kind of use cases you can achieve with it because I found in the course of this year just an endless list of really kind of system prompt system prompt stuff that I can say okay I've used the trick capture context data for AI which is literally I might speak for if I wanted to have a good bank of context data about who knows my childhood more realistically maybe my career goals something that would just be like really boring to type out so I'll just like sit in my car and record it for 10 minutes and that's 10 minutes you get a lot of information in emails which is short text just there is a whole bunch and all these workflows kind of require a little bit of treatment afterwards and different treatment my context pipeline is kind of like just to extract the bare essential so you end up with me talking very loosely about sort of what I've done in my career where I've worked where my light work and it goes it condenses that down to very robotic language that is easy to chunk parse and maybe put into a vector database Daniel has worked in technology Daniel is a has been working in Martin you know stuff like that that's not how you would speak but I figure it's probably easier to parse for after all robots so we've almost got to 20 minutes and this is actually a success because I waste 20 minutes of my of the evening speaking into my headphone and the levels were a shot and it was clipping and I said I can't read each and evaluation I have to be fair I have to give the models a chance to do their thing at what am I hoping to achieve in this okay my function was a daughter's mentioned Deep Gram STT I'm really really hopeful that this prototype will work and it's a build and public open source so anyone is welcome to use it if I make anything good but that was really exciting for me last night when after hours of trying my own prototype seeing someone just made something that works like that you know you're not going to have to build a custom condo environment and image I have AMD GPU which makes things much more complicated I didn't find it and I was about to give up and I said all right let me just give deep grams Linux thing a shot and if this doesn't work I'm just going to go back to trying to vibe code something myself and when I ran the script I was using cloud code to do the installation process it ran the script and oh my gosh it works just like that the tricky thing for all those who wants to know all the nitty ditty degree nitty gritty details was that I don't think it was actually struggling with transcription but pasting wailant makes life very hard and I think there was something not running in the right time anyway Deep Gram I looked at how they actually handled that because it worked out of the box one other stuff didn't and it was quite a clever little mechanism and but more and so it's not the accuracy was brilliant now what am I doing here this is going to be a 20 minutes audio sample and I'm I think I've done one or two of these before but I did this with sure snappy voice notes this is kind of long form it's actually might be a better approximation for what's useful to me than voice mammals like I need to buy three beaters of moke tomorrow and Peter bread which is probably how like half my voice note send like if anyone were to I don't know like find my phone they be like this is the most boring person in the world although actually there are some like kind of journaling thoughts as well but it's a lot of content like that and the probably for the evaluation the most useful thing is slightly obscure tech get hub the clean hooking face not so obscure that is not going to have a chance of knowing it but hopefully sufficiently well known that the models should get us I tried to do a little bit of speaking really fast and speaking very slowly I would say in general I've spoken delivered this at a faster pace than I usually would owe into strong coffee if flowing through my bloodstream and the thing that I'm not going to get into spent work is background noise which in my first take that I had to get rid of my wife come in with my son and for a good night kiss and that actually would have been super helpful to get in because it was non-diorized or if we had diorization a female I could say I want the male voice and that wasn't intended for transcription and I'm not going to get background noise like people honking their horns which is something of done to my main data set where I am trying to go back to some of my voice notes and I take them and run a benchmark but this is going to be just a pure quick test and someone working on a voice note idea that's my sort of end motivation besides thinking it's an astudiate standing technology that's coming to viability and really I know the same it's cheesy can actually have a very transformative effect it's you know voice technology has been life changing for folks living with disabilities and I think there's something really nice about the fact that it can also benefit you know folks who are able bodies and like we can all in different ways
make this tech as useful as possible regardless of the exact way that we're using it and I think there's something very powerful in that and it can be very cool I see huge potential what excites me about voice tech a lot of things actually firstly the fact that it's cheap and accurate as I mentioned at the very start of this and it's getting better and better with stuff like accent handling I'm not sure my fine tune will actually ever come to fruition in the sense that I'll use it day to day as I imagine and get likes you per flawless words error rates because I'm just kind of skeptical about local speech attacks as I mentioned and I think the pace of innovation and improvement in the models the main reason for fine tuning from what I've seen have been people who are something that really blows my mind about ASR is the idea that it's inherently alien fuel or multilingual fanatic based so as folks who use speak very obsturately languages that there might be a policy of training data or almost none at all and therefore the accuracy is significantly reduced or folks in very critical environments I know they're you this is using extensively in medical transcription and dispatcher work as you know the call centers use send out ambulances etc where accuracy is absolutely paramount and in the case of doctors radiologists there might be using very specialized vocab all the time so those are kind of the main two things and I'm not sure that really just for training make it better on a few random tech words with my slightly I mean I have an accent but like not you know an accent that a few other million people have ish I'm not sure that my little fine tune is going to actually like the bump in word error reduction if I ever actually figure out how to do it and get it up to the by the time I've done that I suspect that the next generation of ASR will just be so good that it will kind of be no well that would be cool for a doubt but all this uses instead so that's going to be is for today's episodes of voice training data single long-shaw evaluation who am I going to compare with supposed to be a benchmark but I'm more interested in seeing whisper head to head with two things ready one is whisper variants so you've got these projects like faster whisper distilled whisper it's a bit confusing there's a whole bunch of them and the emerging ASR is which are also a thing my intention for this is I'm not sure I'm going to have the time in any point of the foreseeable future to go back to this whole episode and create a proper source true through a fix everything might do it if I can get one transcriptions as efficiently close to perfection but what I would actually love to do on hugging face I think would be a great probably how I might visualize this is having the audio waveform play and then have the transcript for each model below it and maybe even a like you know two scale and maybe even a local one as well like local whisper versus open AI API etc and I can then actually listen back to segments or anyone who wants to can listen back to segments of this recording and see where a particular model struggled and others didn't as well as the sort of headline finding of which had the best WER but that would require the source of truth okay that's it I hope this was I don't know maybe useful for other folks interested in STT you want to see that I always feel think I've just said something I didn't and do STT I said for those it's in carefully including hopefully the models themselves this has been myself Daniel Rosal for more jumbled repository is about my roving interest in AI but particularly agentic mcp and voice tech you can find me on get up hugging face where else Daniel Rosal.com which is my personal website as well as this podcast whose name I sadly cannot remember but the next time thanks for listening
End of preview. Expand in Data Studio

No dataset card yet

Downloads last month
58

Space using danielrosehill/Podcast-ASR-Evaluation 1

Collections including danielrosehill/Podcast-ASR-Evaluation