* Multi-hop reasoning rarely works with real data in my case.
* Impossible to define advanced metrics over the whole dataset.
* No async support.
I have a recording I've been sitting on for 2 years (a guest lecture a friend recorded) that contains very heavy background noise; you can just barely make out what the lecturer is saying. I wonder if there is any hope I will ever be able to read a transcript of it.
I can figure out what the lecturer is saying (maybe only because I have some context about what he is talking about), but it is too painful to sit through 2 hours of it and try to transcribe it.
I tried uploading the audio file to this service, but got basically nothing useful back.
I hadn't considered acoustics being a whole class of use cases for maintenance, but imagine how much better our built environment would get if everyone had one of these in their pocket, since everyone would be able to see issues & point them out. Visualizing the invisible is a powerful way to get change.
Also that train wheel example was a real missed opportunity to say 'squeaky wheel gets the oil'.
e.g. tests I want applied to anything retrieved from the database. What I'd like is to optimise the prompt around those tests (or maybe even the tests themselves), but I can't seem to express that in DSPy signatures.
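One workaround I've seen (a sketch, not the official DSPy way of doing this): since a DSPy metric is just a Python callable taking an example and a prediction, the retrieval tests can live inside the metric rather than the signature, so an optimizer at least selects for prompts whose retrievals pass them. The test list and the dict-shaped prediction below are my own illustrative stand-ins, not DSPy API:

```python
# Hypothetical sketch: fold per-retrieval tests into a DSPy-style metric.
# A real dspy.Prediction exposes fields as attributes; a plain dict is
# used here just to keep the example self-contained.

def passes_retrieval_tests(passages):
    """Run every test against every passage retrieved from the database."""
    tests = [
        lambda p: len(p.strip()) > 0,   # no empty rows
        lambda p: "TODO" not in p,      # no placeholder content
    ]
    return all(test(p) for p in passages for test in tests)

def metric(example, pred, trace=None):
    # This (example, pred, trace) shape is what DSPy optimizers such as
    # BootstrapFewShot expect from a metric callable.
    retrieval_ok = passes_retrieval_tests(pred.get("passages", []))
    answered = bool(pred.get("answer", "").strip())
    return float(retrieval_ok and answered)
```

This doesn't make the tests part of the signature itself, but it does make them something the optimizer scores against, which may be close enough for prompt tuning.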