The buses were constantly late (sometimes I waited an hour for a bus from Darlinghurst to the eastern suburbs on weekends). Unbelievably noisy, too: walking through the CBD on a workday morning was deafening, with engines that had no acoustic dampening at all. Reckless drivers. I gave up trying to ride a bike; it was just too dangerous, with aggressive bus and car drivers :(
However, this could be explained by the GPS tracking data we have today rather than by improvements in reliability. When you open your transit app, you want to know when the next bus is, so you can find an alternative if the wait is too long. When it tells you the next one is in 3 minutes (which is an accurate estimate because of the GPS), you don't actually care if that bus is running 18 minutes later than originally scheduled.
For the bus I use for my commute, I don't leave either the house or the office until I see its GPS tracker pass certain points of the route. I've never had to wait for more than 3 minutes at a bus stop doing that. On occasions where there is no GPS feed, I treat that bus as "theoretical", and don't risk going out to try to catch it at its scheduled time, unless I'm desperate. But every time I did risk it, it ended up arriving right on schedule.
So I'd say the experience of catching buses has profoundly improved, but not necessarily because the reliability has improved.
And 10 years ago we didn't have Opal readers, which are great: together with digital driving licenses on our phones, they have allowed many of us to completely forgo carrying a wallet.
Bus drivers are still as reckless and grumpy as they used to be though.
> ts = 1571595618.0
This is a timestamp; no TZ data is required.
> x = datetime.utcfromtimestamp(ts)
Now for some reason the developer cares about UTC all of a sudden?
If the developer doesn't care about the TZ, they should use:
datetime.fromtimestamp(ts)
The main issue is that .utcfromtimestamp() and .fromtimestamp() both return naive datetimes, instead of aware ones with tzinfo set to UTC or the local timezone respectively.
# what I used to do:
>>> datetime.utcnow().replace(tzinfo=timezone.utc)
# what I'll be doing from now on:
>>> datetime.now(timezone.utc)
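The same fix applies to the timestamp case: passing a timezone to fromtimestamp yields an aware datetime directly, sidestepping the naive-datetime pitfall. A quick sketch using the timestamp from above:

```python
from datetime import datetime, timezone

ts = 1571595618.0

# Naive: tzinfo is None, and the value is interpreted in local time.
naive = datetime.fromtimestamp(ts)
print(naive.tzinfo)  # None

# Aware: tzinfo set to UTC, unambiguous regardless of local settings.
aware = datetime.fromtimestamp(ts, tz=timezone.utc)
print(aware.isoformat())  # 2019-10-20T18:20:18+00:00
```

Aware datetimes also compare and subtract correctly across timezones, which naive ones silently get wrong.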
During polyphasic experiments in my youth, my biggest obstacle was always securing reliable conditions to allow the daytime naps.
It was a scam.
I wonder how it was done; Etherscan didn't show anything, and compiling the source led to a few bytes of difference between the compiled bytecode and what was deployed.
Fortunately, there are tools like Ganache, which you can run with `ganache-cli --fork` to reliably emulate locally what will happen when transactions are sent to mainnet. I would accept no substitute approach when dealing with suspect contracts.
For one, they don't rely on "number go up" to make money, they use strategies similar to those used by HFT firms to make money from market inefficiencies. They generally don't have loyalty to a particular crypto platform, they just go wherever they can find a competitive advantage. They also tend to be much less politically outspoken and often left-leaning, in contrast to the vocal libertarian views that permeate the rest of the field.
They also most certainly aren't "imagining" making money. Most of their strategies are essentially elaborate forms of arbitrage, which are risk-free sources of profit by nature (until out-competed). Their only losses come from fees paid for deploying strategies that turn out to be unsuccessful. Even fees for failed transactions are pretty much a non-issue these days because of Flashbots.
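As a toy illustration of the arbitrage point (the venue names, prices, and fee rates here are all made up), a cross-venue strategy boils down to comparing quotes and checking whether the spread survives fees:

```python
# Hypothetical order-book snapshots from two venues.
quotes = {
    "venue_a": {"bid": 99.80, "ask": 99.90},
    "venue_b": {"bid": 100.30, "ask": 100.40},
}
fee_rate = 0.001  # assumed 0.1% taker fee per side

def arbitrage_profit(buy_venue, sell_venue, quotes, fee_rate):
    """Profit per unit from buying at one venue's ask and selling at
    another venue's bid, net of taker fees on both legs."""
    buy = quotes[buy_venue]["ask"]
    sell = quotes[sell_venue]["bid"]
    fees = (buy + sell) * fee_rate
    return sell - buy - fees

profit = arbitrage_profit("venue_a", "venue_b", quotes, fee_rate)
print(round(profit, 4))  # 0.1998
```

Real strategies also have to account for slippage, latency, and execution risk, which is where the actual competition happens; the arithmetic itself is the easy part.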
This got me thinking. Voice recognition is basically a commodity now: there are open source AI engines that can do it offline really well. So the recognition part is solved; you can just grab it from your distro's package manager. Now there's just the language part.
Thing is, I don't want to speak to my computer using English. Aside from the enormous practical problems in natural language processing you've outlined, I just find the idea creepy[1].
What I want is to unambiguously tell it to do arbitrary things. I.e. use it as an actual computer, not a toy that can do a few tricks. I.e. actually program it. In some kind of Turing complete shell language that is optimized for being spoken aloud. You would speak words into the open source voice recognizer, it writes those to stdout, then an interpreter reads from stdin and executes the instructions.
Is there any language like this? What should it look like?
And yeah that would take effort to learn to use it right, just like any other programming language; so be it. This would be a hobbyist thing.
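The interpreter half of that pipeline could be very small. As a sketch (the word-to-token vocabulary here is invented, and a real spoken language would need a much richer one), here's a tiny postfix interpreter where every token is a pronounceable word, so the recognizer never has to disambiguate grouping:

```python
# Hypothetical spoken vocabulary: digits and operators as distinct words.
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
OPS = {"plus": lambda a, b: a + b,
       "minus": lambda a, b: a - b,
       "times": lambda a, b: a * b}

def interpret(tokens):
    """Evaluate a spoken postfix utterance, e.g. 'three four plus'."""
    stack = []
    for word in tokens:
        if word in WORDS:
            stack.append(WORDS[word])
        elif word in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[word](a, b))
        else:
            raise ValueError(f"unrecognized word: {word}")
    return stack

# In the full pipeline, the recognizer's stdout would feed a loop like:
#   for line in sys.stdin:
#       interpret(line.split())
print(interpret("three four plus two times".split()))  # [14]
```

Postfix is a deliberate choice here: with no parentheses or precedence, each spoken word is unambiguous on its own, which matters when the input channel is noisy.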
[1] https://i.kym-cdn.com/photos/images/original/002/054/961/748...
You knew that you were drawing something designed for a computer to recognise as unambiguously as possible, while being efficient to draw quickly and easy to learn for you. I feel like that's the kind of notion that voice interfaces should somehow expand upon.
[1] https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)