muds commented on Work after work: Notes from an unemployed new grad watching the job market break   urlahmed.com/2025/11/05/w... · Posted by u/linkregister
crystal_revenge · 2 months ago
I always feel a bit conflicted when I read these experiences from new grads: on the one hand, there's no question the job market today is not the one they signed up for; on the other, the expectations of recent grads are completely alien to me as someone who entered the job market in the shadow of the dot-com bust.

The biggest thing that seems foreign to me is the expectation that "I'm a fit for the job, I should therefore get the job". When I entered the workforce every job was a competition.

The process was: companies would post a job, then collect resumes until they felt they had enough candidates to proceed (or some arbitrary deadline was reached). If you were the only good candidate, it was very common for them to decide there wasn't enough competition and simply restart the search. This process could easily take months. Then, if there were enough qualified candidates, you would get the interview, but you would always be competing with 3-5 other people the company felt were roughly equal matches.

I had worked part-time (not purely interned) in my field for 3 years, so had plenty of experience at the entry level. Even then competition was stiff, and an interviewer simply not vibing with you was enough to lose a job.

I vividly recall having my target pay set at 2x minimum wage, eating canned stew because that's all I could afford, and being about to lower my standards when I finally got a call back after months of searching. So as a new grad with reasonably similar qualifications to the author, I was pumped to be making 2x minimum wage out of college.

And at the time none of my classmates considered it to be a challenging job market.

Flash forward a few years and my younger siblings faced the GFC. I knew tons and tons of really bright new grads who simply couldn't get work for years. I was shocked that most of them didn't complain too much and were more than willing to suck up to corporate America as soon as a job was offered (I personally thought a bit more resistance was in order).

I'm not sure I really have a point other than to emphasize how truly bizarre the last decade has been, where passing leetcode basically meant a six-figure salary out of undergrad. I'm typically a doomer, but honestly I think it's hard to disambiguate what part of this job market is truly terrible and what part is people who have spent most of their lives living in unprecedentedly prosperous times.

muds · 2 months ago
Much of your argument rests on refuting the notion that the author feels "entitled" to a high-paying job. On that point, I agree with you. Any engineering undertaking is most productive when it is a meritocratic and competitive pursuit. People who feel "entitled" to an engineering job unfortunately need a reality check on their true competitiveness.

However, that doesn't seem like the author's core point. Their core point is that the level of competition is past the point where their meritocratic achievements carry any weight, because to be competitive in the present marketplace they would need to (1) have been _born_ in a different country with a lower cost of living, (2) give up certain basic freedoms, or (3) settle for a less skillful job where they can be an outlier in the distribution (for how long?), etc. -- all of which, to them, feel less meritocratic.

Of course, they might also feel "entitled" to a job, but that's not the interesting part of their argument (at least to me).

muds commented on On the paper “Exploring the MIT Mathematics and EECS Curriculum Using LLMs” [pdf]   people.csail.mit.edu/asol... · Posted by u/jlaneve
ttpphd · 3 years ago
I think you missed the point that data needs to be collected and presented ethically. It's not about it being a work in progress and not peer reviewed.
muds · 3 years ago
I agree that the data collection process wasn't ethical, and the professor should definitely be reprimanded for that. It's extremely sad that the coauthors weren't aware of this as well. And I feel terrible for the undergrads: their first research experience was publicly rebuked through no fault of their own.

However, there is no shortage of projects with sketchy data collection methodologies on arXiv that haven't received this amount of attention. The point of putting stuff on arXiv _is_ that the paper will not pass / has not passed peer review in its current form! I might even call arXiv a safe space to publish ideas. We all benefit from this: a lot of interesting papers are only available on arXiv versus being shared between specific labs.

I'm concerned that this fiasco was enabled by a new paradigm in AI social media reporting, where a project's findings are amplified and all the degrees of uncertainty are suppressed. And I'm honestly not sure how to best deal with this other than either amplifying the uncertainty and jankiness in the paper itself to an annoyingly noticeable level, or just going back to the old way of privately sharing ideas.

Maybe this is the best-case scenario for these sorts of papers? They pushed a paper to a public venue, and got a public "peer review" of the paper. Turns out the community voted "strong reject;" and it also turns out that the stakes for public rejection are (uncomfortably, IMO) higher than for a normal rejection. Maybe this causes the researchers to only publicly release better research, or (more likely) it causes them to privately release all future papers.

muds commented on On the paper “Exploring the MIT Mathematics and EECS Curriculum Using LLMs” [pdf]   people.csail.mit.edu/asol... · Posted by u/jlaneve
muds · 3 years ago
Putting papers and code on arXiv shouldn't be punished. The incentive to do this is to protect your idea from getting scooped, and also to inform your close community about interesting problems that you're working on and get feedback. ArXiv is meant for work-in-progress ideas that won't necessarily survive the peer review process, but this isn't really acknowledged properly on social media. I highly doubt the Twitter storm would have been this intense if the Twitter posts had explicitly acknowledged this as a "draft publication which hints at X." But I admit that pointing fingers at nobody in general and social media specifically is a pretty lazy solution.

The takeaway IMO seems to be to prepend the abstract with a clear disclaimer sentence conveying the uncertainty of the research in question. For instance, adding a clear "WORKING DRAFT: ..." in the abstract section.

muds commented on No, GPT4 Can’t Ace MIT   flower-nutria-41d.notion.... · Posted by u/YeGoblynQueenne
iudqnolq · 3 years ago
ImageNet has five orders of magnitude more answers, which I would assume makes QA a completely different category of problem.

The authors could probably have carefully reviewed all ~300 of their questions. If they couldn't, they could have just reduced their sample size to, say, 50.

muds · 3 years ago
I admit that ImageNet isn't the best analogy here. But I'm pretty confident that this data cleaning issue would have been caught in peer review. The biggest issue, which I still don't understand, was the removal of the test set. That was bad practice on the authors' part.
muds commented on No, GPT4 Can’t Ace MIT   flower-nutria-41d.notion.... · Posted by u/YeGoblynQueenne
muds · 3 years ago
I'm not sure what to make of this post. There is always a degree of uncertainty in experimental design, and it's not surprising that there are a couple of buggy questions. ImageNet (one of the most famous CV datasets) is at this point known to have many such buggy answers. What is surprising is the hearsay that plays out on social media, blowing the results out of proportion and leading to opinion pieces like these that target the authors instead.

Most of the damning claims in the conclusion section (obligatory: I haven't read the paper entirely, just skimmed it) usually get ironed out in the final deadline run by the advisors anyway. I'm assuming this is a draft paper published on arXiv for the EMNLP deadline this coming Friday. So this paper hasn't even gone through the peer review process yet.

muds commented on Sile: A Modern Rewrite of TeX   sile-typesetter.org/... · Posted by u/signa11
daly · 3 years ago
TeX and Literate Programming (and Lisp) are my fundamental, day-to-day tools.

Code it, explain it, generate a Literate PDF containing the code.

The programming cycle is simple. Write the code in a latex block. Run make. The makefile extracts the code from the latex, compiles it, runs the test cases (also in the latex), and regenerates the PDF. Code and explanations are always up to date and in sync.

I have found no better toolset.
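The extract-compile-test cycle described above can be sketched in a few lines. This is a hypothetical illustration, not daly's actual tooling: the `\begin{code}...\end{code}` environment name and the tangle-then-compile Makefile step are assumptions.

```python
import re

# Hypothetical sketch of the "extract code from the LaTeX source" (tangle)
# step in a literate-programming workflow. The environment name "code" is
# an assumption; real setups may use noweb chunks or other markers.
CODE_ENV = re.compile(r"\\begin\{code\}\n(.*?)\\end\{code\}", re.DOTALL)

def extract(tex_source: str) -> str:
    """Concatenate every code environment in document order."""
    return "\n".join(
        m.group(1).rstrip("\n") for m in CODE_ENV.finditer(tex_source)
    )

# A Makefile rule would then feed the result to the compiler, roughly:
#   prog.c: doc.tex
#       python tangle.py doc.tex > prog.c
```

Running `make` would then rebuild both the program and the PDF from the same `.tex` file, which is what keeps code and explanation in sync.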

muds · 3 years ago
I've been struggling with keeping track of research experiments and code at the same time. This seems pretty cool! I like how this method is language agnostic and uses "matured" tools. Question: I'd love to give this a try; do you have any public code snippets?
muds commented on Test scores are not irrelevant   dynomight.net/are-tests-i... · Posted by u/colinprince
ramraj07 · 3 years ago
Uhm, what exactly are you trying to get at? I said subject GRE is a very good measure of eventual success in academia, do you have a solid response to that or just a rambling tirade?

Paper authorship, if the student is the first author, shows grit and “gumption” I suppose? As if that’s what’s needed in academia at this moment (it’s important but not the main requirement). But almost no undergrad gets a first-author paper. They get mentioned in the middle because they ran a bunch of SDS gels. I wasn’t even interested in trying to become a professor and I got 10 papers before I finished my PhD; do you know how many I (or any of the folks I actually know who are now professors) had during our undergrad? Zero. And not for lack of trying. You know who actually got papers? The son of the department head.

muds · 3 years ago
The original comment, to me, reads more like "subject GRE is a definitive measure of eventual success in academia," and I was arguing against the definitive part. Thanks for the clarification. Maybe it's a good measure for you, your cohort, and people in similar situations.

> Almost no undergrad gets a first author paper

Maybe this differs by field, but we have a lot of undergraduate first-author papers in programming languages and machine learning. I mean -- through and through -- undergraduate students bringing up a topic, getting guidance from professors and senior PhD students, getting results by the end of the semester, and publishing by the next year. Even the people who end up "running the SDS gels" either fall out by the next year or end up working toward their own first-author publications. I've always chalked this up to the experimental setup cost being very low in CS compared to the "hard sciences," so most undergraduate students are already comfortable with all the tools they need to do research.

> I wasn't interested in trying to become a professor

I think this is precisely the variable that a standardized test cannot account for! I feel an "authentic" undergraduate research experience is successful if it helps students realize if research is right for them or not.

> ... papers ... the son of the department head...

I see where your frustration is stemming from. Sorry this was your first experience with undergraduate research.

u/muds · Karma: 451 · Cake day: May 22, 2019