"The effects of parental occupational exposures on autism spectrum disorder severity and skills in cognitive and adaptive domains in children with autism spectrum disorder" https://www.sciencedirect.com/science/article/pii/S143846392...
The person leading this study, Erin C. McCanlies, was forced out of the CDC: her division was eliminated and she took early retirement. https://www.psypost.org/scientist-who-linked-autism-to-chemi...
---
"The findings suggest that workplace exposures to several specific chemical classes were associated with worse outcomes in children with ASD. One of the strongest and most consistent patterns involved plastics and polymer chemicals. Fathers’ exposure to plastics was associated with lower scores across all cognitive and adaptive skill domains, including language, motor coordination, daily living skills, and overall functioning. When both parents were exposed, the deficits appeared to compound.
“I was surprised how strongly and consistently plastics and polymers stood out as being linked with multiple developmental and behavioral outcomes including irritability, hyperactivity, and daily living,” McCanlies told PsyPost.
Exposure to ethylene oxide—commonly used in hospital sterilization—was also linked to more severe autism symptoms, lower expressive language abilities, and poorer adaptive functioning. Similarly, parental exposure to phenol (used in construction, automotive, and some consumer products) and pharmaceuticals was associated with increased ASD severity and more pronounced behavioral challenges, especially hyperactivity and stereotyped behavior.
While the results do not imply that all children exposed to these chemicals will develop more severe symptoms, the patterns suggest that early life exposure to workplace toxicants may amplify certain developmental difficulties in children who already meet criteria for ASD. The study provides one of the most detailed looks to date at how parental occupation may relate not just to diagnosis, but to variation in how autism is expressed.
“Our findings suggest that certain parental workplace exposures may be related not just to autism, but to worse symptoms and autism behaviors,” McCanlies explained."
Link for the 120B version: https://huggingface.co/lmstudio-community/gpt-oss-120b-MLX-8...
It's taking 21 GB of memory on my 64 GB MBP; I'm still tuning it and settling on context size, temperature, and other settings.
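As a rough back-of-envelope check (my assumption here: resident weight memory ≈ parameter count × bits per weight ÷ 8, ignoring KV cache, activations, and runtime overhead), you can sketch what a quantized model should take before downloading it:

```python
def estimate_model_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough memory footprint of quantized weights in decimal GB.

    Ignores KV cache, activations, and runtime overhead, so real usage
    (as reported by LM Studio or Activity Monitor) will be higher.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 20B model at 8-bit is about 20 GB of weights alone;
# at 4-bit, roughly half that.
print(round(estimate_model_gb(20, 8), 1))  # 20.0
print(round(estimate_model_gb(20, 4), 1))  # 10.0
```

This is only the weight tensors; leave headroom for context (KV cache grows with context size) when deciding what fits on a given machine.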
My comment from yesterday:
"thanks openai for being open ;) Surprised there are no official MLX versions and only one mention of MLX in this thread. MLX basically converst the models to take advntage of mac unified memory for 2-5x increase in power, enabling macs to run what would otherwise take expensive gpus (within limits). So FYI to any one on mac, the easiest way to run these models right now is using LM Studio (https://lmstudio.ai/), its free. You just search for the model, usually 3rd party groups mlx-community or lmstudio-community have mlx versions within a day or 2 of releases. I go for the 8-bit quantizations (4-bit faster, but quality drops). You can also convert to mlx yourself...
Once you have it running in LM Studio, you can chat there in their chat interface, or you can run it through an API that defaults to http://127.0.0.1:1234
You can run multiple models that hot-swap, load instantly, switch between them, etc.
It's surprisingly easy, and fun. There are actually a lot of cool niche models coming out, like this tiny high-quality search model released today (which also got an official MLX version): https://huggingface.co/Intelligent-Internet/II-Search-4B
Other fun ones are Gemma 3n, which is multi-modal; a larger one that is actually solid but takes more memory, the new Qwen3 30B A3B (Coder and Instruct); Pixtral (Mixtral vision with full-resolution images); etc. Looking forward to playing with this model and seeing how it compares."
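The local API mentioned above is OpenAI-compatible, so hitting it from Python is a few lines of stdlib code. A minimal sketch, with assumptions: LM Studio's server is running on the default http://127.0.0.1:1234, and the model name ("gpt-oss-120b" below) is a placeholder for whatever identifier the app shows for your loaded model. The request is wrapped in try/except so the script degrades gracefully when no server is up:

```python
import json
import urllib.request

# LM Studio exposes an OpenAI-compatible endpoint on port 1234 by default.
URL = "http://127.0.0.1:1234/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Build an OpenAI-style chat payload for the local server.

    The model name is whatever LM Studio lists for the loaded model;
    "gpt-oss-120b" here is a placeholder, not a guaranteed identifier.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_request("Say hello in five words.")

try:
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
except OSError as exc:
    # LM Studio not running / server not started -- fail gracefully.
    print(f"could not reach local server: {exc}")
```

Because the endpoint mirrors the OpenAI chat-completions shape, any OpenAI-compatible client library pointed at that base URL should also work, which is what makes the hot-swapping between local models convenient.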
It's an interesting proxy, but idk how reliable it'd be.
LM Studio community: 20b: https://huggingface.co/lmstudio-community/gpt-oss-20b-MLX-8b... 120b: https://huggingface.co/lmstudio-community/gpt-oss-120b-MLX-8...
After a few rounds of AI generating AI content from AI content, I'm sure it could eventually become slop... like model collapse, lol, idk.
"AI models collapse when trained on recursively generated data" - https://www.nature.com/articles/s41586-024-07566-y