eega commented on Mercedes beats Tesla to autonomous driving in California   theregister.com/2023/06/0... · Posted by u/belter
AussieWog93 · 3 years ago
Recently I was reading that in Europe, most cars sold don't even have an automatic transmission, yet drivers there are much safer than people in the US or Australia.

There was some speculation that because they shift gears manually, they have to pay more attention to the road and can't do things like drink coffee while driving. They also enjoy driving a lot more than we do.

I wonder if other technology would have a similar effect of ultimately making driving less safe and enjoyable. I've never driven a super-modern car, but I do know that I zone out a bit when cruise control is on...

eega · 3 years ago
There are loads of cars with automatic transmissions in Europe, especially newer, mid-range to premium models. But it definitely isn't as prevalent as in the US.

It is no problem at all to drink coffee or even eat while driving a manual car, so I have no idea where this comes from ;)

eega commented on Mercedes beats Tesla to autonomous driving in California   theregister.com/2023/06/0... · Posted by u/belter
asdff · 3 years ago
Lead footing is a phenomenon exclusive to automatics. Say you are on a 35 mph road: you lead-foot it a little, the car shifts from third into the overdrive gear, and now you are going 45 or 50 mph with the engine barely making any noise, thinking this is just fine. In a manual, you'd know if you were in third at 50 mph: the engine would be howling another couple thousand rpm, depending on the car. Getting into your overdrive gear would require a conscious effort to shift into it.
eega · 3 years ago
That's just silly. Outside of city traffic I drive my (manually shifted) car basically only in the highest gear and get the same effect. It has nothing to do with automatics, but with powerful cars.
eega commented on Three Companies Impersonated Millions to Influence Internet Policy   ag.ny.gov/press-release/2... · Posted by u/RecycledEle
Jupe · 3 years ago
> LCX and Lead ID were responsible for many of these fake comments, letters, and petition signatures. Across four advocacy campaigns in 2017 and 2018, LCX fabricated consumer responses used in approximately 900,000 public comments submitted to the Environmental Protection Agency (EPA) and the Bureau of Ocean Energy Management (BOEM) at the U.S. Department of the Interior. Similarly, in advocacy campaigns between 2017 and 2019, Lead ID fabricated more than half a million consumer responses. These campaigns targeted a variety of government agencies and officials at the federal and state levels.

Wow. Just wow. How can anyone at these companies be avoiding jail time?

And who hired them? Is there no money trail?

eega · 3 years ago
Not only no jail, but the fine is laughable as well …
eega commented on I lost everything that made me love my job through Midjourney   old.reddit.com/r/blender/... · Posted by u/Fraterkes
figassis · 3 years ago
I use however a lot
eega · 3 years ago
Same
eega commented on Docker   computer.rip/2023-03-24-d... · Posted by u/hundt
ghshephard · 3 years ago
Nomad is awesome and works at scale. The engineers continue to battle-harden it and it's a joy to work with. You do have to manage things like service discovery (usually with Consul) and traffic routing separately - but the integration with Vault is sublime.

About the only real negative of Nomad is that it doesn’t have the mindshare that k8s does, so you don’t see the amount of developer engagement in extending it the way you do in the k8s SIGs. Also, being an expert in Nomad doesn’t give you the same number of career opportunities, and on the other side - there aren’t umpteen thousand nomad SREs the way there are with k8s - so getting someone up to speed can take a couple months (but this system is very well defined, well documented, and small enough that any half talented engineer can master it very quickly)

Nomad does have the very important advantage that Hashicorp stands behind the product - so if anything goes awry, you've got a support team and escalation that will jump on and root cause/resolve any issue, usually within a matter of hours, and even in the really squirrelly cases (that you are only likely to see when you are managing many, many thousands of nodes in a cluster) within days.

eega · 3 years ago
Fewer career opportunities need not be a bad thing if they are better paid and the companies are more accommodating.
eega commented on We ran a phone check at a Y Combinator event in SF   blog.getclearspace.com/we... · Posted by u/bilsbie
toastal · 3 years ago
It’s a shame to see cash going away in some places. The alternatives are too traceable & used to sell your purchasing behaviors, & I’m not keen on that. We’re talking about the security of our phone’s data, but then giving up on our financial data. (Not to say there isn’t a place, but whole-sale ‘cashless’ is worrying).
eega · 3 years ago
Well, get better data protection. Cash is a pain in the ass …
eega commented on Introducing ChatGPT and Whisper APIs   openai.com/blog/introduci... · Posted by u/minimaxir
polygamous_bat · 3 years ago
> I have no idea how OpenAI can make money on this. This has to be a loss-leader to lock out competitors before they even get off the ground.

The worst thing that can happen to OpenAI+ChatGPT right now is what happened to DallE 2, a competitor comes up with an alternative (even worse if it's free/open like Stable Diffusion) and completely undercuts them. Especially with Meta's new Llama models outperforming GPT-3, it's only a matter of time someone else gathers enough human feedback to tune another language model to make an alternate ChatGPT.

eega · 3 years ago
Yeah, they might be worried about open, crowd-sourced approaches like Open Assistant (https://open-assistant.io/).
eega commented on Replacing a SQL analyst with 26 recursive GPT prompts   patterns.app/blog/2023/01... · Posted by u/kvh
eega · 3 years ago
> Playing around with GPT at this level you get the feeling that “recursive GPT” is very close to AGI. You could even ask GPT to reinforcement learn itself, adding new prompts based on fixes to previous questions. Of course, who knows what will happen to all this when GPT-4 drops.

Going way too far out on a limb here. This has nothing to do with AGI, which would require an intrinsic understanding not just of a well-defined and easily checkable field like SQL, but of, well, everything.

Regarding GPT-4: OpenAI's CEO Sam Altman stated that the expectations around GPT-4 are way over-hyped. People on the Internet talk as if AGI is coming in the guise of GPT-4, but it's "just" going to be an incrementally better evolution of GPT-3.5.

Mind, I'm in no way saying that LLMs aren't exciting - they are to me - or that they won't change the world, but hold your horses.

eega commented on Show HN: New AI edits images based on text instructions   github.com/brycedrennan/i... · Posted by u/bryced
do_anh_tu · 3 years ago
Or, if there is a Colab version, I’d happy to pay Google for premium GPU.
eega · 3 years ago
Well, just open a new GPU Colab notebook, create a cell with `!pip install imaginairy`, and you should be good to go …
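For reference, a minimal Colab setup along those lines might look like this (the `imagine` CLI comes from the imaginAIry project; the prompt and output location are illustrative assumptions, not from the original comment):

```shell
# Run in a Colab cell with a GPU runtime selected
# (Runtime -> Change runtime type -> GPU)
!pip install imaginairy

# Generate an image from a text prompt; the CLI writes results
# into an outputs directory in the working directory
!imagine "a scenic mountain landscape at sunset"
```

A sketch only: check the project's README for the current CLI flags before relying on it.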
eega commented on We’ve filed a law­suit chal­leng­ing Sta­ble Dif­fu­sion   stablediffusionlitigation... · Posted by u/zacwest
eega · 3 years ago
Funny they didn’t include OpenAI in the lawsuit …
