Readit News
janussunaj commented on Feynman: I am burned out and I'll never accomplish anything (1985)   asc.ohio-state.edu/kilcup... · Posted by u/ent101
tnecniv · 2 years ago
I do love animals, but I’ve always been intimidated by the responsibility of being the person solely in charge of that animal’s life. I’ve always struggled with routines and such (I found out in adulthood that I have ADHD).

I have thought about it lately, though. Maybe having that responsibility will cause other things to click into place? However, I’m worried about it not working and impacting a life beyond my own.

janussunaj · 2 years ago
If anything, your concern is a sign that you'd be a responsible pet parent. I don't have experience with dogs (rescue or not), but caring for a cat is relatively straightforward and can bring lots of joy. Writing this with a cat asleep on my lap, one who showed up in my backyard a few years ago.
janussunaj commented on AI is replacing customer service jobs across the globe   washingtonpost.com/techno... · Posted by u/bookofjoe
iinnPP · 2 years ago
“It was [a] no-brainer for me to replace the entire team with a bot,” he said in an interview, “which is like 100 times smarter, who is instant, and who cost me like 100th of what I used to pay to the support team.”

This was obviously going to happen, though the amount of pushback I received for making exactly this observation was less obvious.

The main argument was hallucinations, but I have corrected enough people in customer service to know that the humans are wrong more often than ChatGPT (even 3.5).

janussunaj · 2 years ago
The reason humans are wrong is that they are untrained, paid peanuts, and overworked. Basically, they are already being treated like bots.

Even a mediocre chatbot with some in-domain training data is fine for the purpose it serves, which is not to maximally help the customer.

janussunaj commented on John Carmack on AI   twitter.com/ID_AA_Carmack... · Posted by u/miohtama
SilverBirch · 2 years ago
This is the core irony of the AI enthusiasts: by massively overstating the capabilities of AI, they provide the perfect ammunition for AI doomsters. If the AI enthusiasts focused on what can reasonably be done with AI today, they would have a lot less to worry about from regulators, but instead they keep making claims so ambitious that, actually, yes, we probably should heavily regulate it if the claims are true.
janussunaj · 2 years ago
My cynical take is that this is actually the PR strategy at OpenAI (and others that had to jump on the train): by talking up vague "singularity" scenarios, they build up the hype with investors.

Meanwhile, they're distracting the public from the real issues, which are the same as always: filling the web with more garbage, getting people to buy stuff, and eroding attention spans and critical thinking.

The threat of job loss is also real, though we should admit that many of the writing jobs eliminated by LLMs are part of the same attention economy.

janussunaj commented on Loneliness reshapes the brain   quantamagazine.org/how-lo... · Posted by u/theafh
000ooo000 · 3 years ago
Sorry you experienced something similar. It sure is unpleasant. I can feel the bitterness but I don't think I'm ready to admit I can't work through it myself. Hope things are on the up for you.
janussunaj · 3 years ago
"Work through it myself" is almost a contradiction in this case. Definitely look into getting some form of counseling you can afford.

Another thing you can do in parallel is reaching out to other friends/acquaintances you might have lost touch with. Tell them honestly what you appreciate about them and that you'd like to revive the friendship.

In both cases you'll feel awkward and uncertain. It's normal. Also, don't expect any single relationship to fulfill your unmet needs. But making yourself vulnerable to others is the beginning of "working through it yourself".

janussunaj commented on Ask HN: Burnt out from big tech. What's next? · Posted by u/janussunaj
janussunaj · 3 years ago
(OP here) I wanted to avoid a wall of text, so I'll elaborate here on where I'm coming from.

My complaints with FAANG have to do with perverse incentives that reward nonsensical decisions, poorly thought-out and over-engineered projects, grandiose documents, duplication of work, selective reporting of metrics, etc.

The few stretches where I had a really good manager, a sane environment, and fulfilling work lasted only until the next reorg. It seems like most organizations are either stressful, with a lot of adversarial behavior, or have almost nothing to do but depressing busywork. I also find the social aspect lackluster, if not downright alienating. I feel at a dead end both in career growth and in opportunities to learn on the technical side. I could roll the dice on another team change, but I'm not eager about the prospects.

Most of my work experience is in ML, but I don't want to box myself into that. I find the current hype around generative models insufferable, and the typical ML project today consists of somewhat sloppy Python with little in the way of good engineering practices. I'm also tired of the increasingly long and opaque feedback loops (come up with an idea, wait for your giant model to retrain, hope that some metric goes up). I'm still passionate about some aspects (e.g., learning representations, knowledge grounding, sane ML workflows).

I hear that academia has similar issues (though again I mostly know about ML), and I imagine lots of industries have worse conditions than tech. I realize that sloppiness and politics are a fact of life, so I'm wary of falling into the "grass is greener" trap.

u/janussunaj

Karma: 126 · Cake day: February 9, 2023