Since we're all clutching our pearls, we might as well clutch all of them.
Please try to be more helpful and less presumptive, especially when someone is asking to learn.
None of these comments are fixing issues or trying to make a difference. Sending the product back is a really good idea, especially if this change in terms means you can get a refund even if you've had it a long time.
I've always wondered why professors and supervisors, after experiencing these abuses themselves, continue to perpetuate them.
The only explanation I've come up with is that the system naturally weeds out those who resist or speak up by stalling their careers. As a result, it selects for individuals who don’t make trouble, those who passively obey and endure even the worst forms of dysfunction.
In the end, this leads to the normalization of abuse, with people rationalizing it as "if I went through it, others should too" — wait, no em-dash — as a way to protect their own ego.
The only thing even worse is when the abuse turns passive-aggressive: denying opportunities without ever saying it outright, hostility disguised as kindness, ambiguous and demoralizing feedback, delaying responses, making people miss crucial deadlines, assigning pointless or overwhelming tasks. They excel at this too.
If I ever had children, I would never let them attend a European university.
I've frequently seen talented technical contributors held back academically because they bring too much value to the lab to be allowed to graduate quickly. I've personally had my own funding threatened if I didn't work "at least 60 hours each week" on my ex-advisor's work (which was in no way related to my degree or research interests). I was fortunate to find another advisor and funding source quickly, but most advisors are absolutely profiting in their careers off the backs of their students, leveraging both carrot and stick to fuel their ambition. It's a problem of modern academia and I'm not sure how to fix it.
Words in sentences kinda form graphs: each word references other words or is a leaf being referenced, both inside sentences and between sentences.
Given the success of the attention mechanism in modern LLMs, how well would they do if you trained an LLM to process an actual graph?
I guess you'd need some alternate tokenizer for optimal performance.
There are also graph tokenizers that let more standard transformers operate on graphs, for tasks like classification, generation, and community detection.
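One simple way to make a standard attention layer respect an actual graph is to mask the attention scores with the adjacency matrix, so each node only attends to its neighbors (plus itself). Here's a minimal single-head sketch in NumPy; the random projections, toy graph, and masking scheme are all illustrative assumptions, not any particular model's API:

```python
import numpy as np

def graph_masked_attention(x, adj):
    """Single-head self-attention where node i may only attend to
    nodes j with an edge adj[i, j] == 1, plus itself (self-edge)."""
    n, d = x.shape
    rng = np.random.default_rng(0)
    # Toy fixed projections; a real model would learn these.
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)
    mask = adj.astype(bool) | np.eye(n, dtype=bool)   # always allow self-attention
    scores = np.where(mask, scores, -np.inf)          # hide non-neighbors
    # Row-wise softmax (stable: subtract each row's max first).
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

# Tiny word graph: 0 -- 1 -- 2 form a chain, node 3 is isolated.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
x = np.random.default_rng(1).standard_normal((4, 8))
out = graph_masked_attention(x, adj)
print(out.shape)  # (4, 8): one updated vector per node
```

An isolated node ends up attending only to itself, while connected nodes mix information along edges — the same trick a causal mask plays in an ordinary decoder, just with graph structure instead of a triangular matrix.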
https://www.ers.usda.gov/data-products/ag-and-food-statistic...