Hi, Felix from Anthropic here. I work on Claude Cowork and Claude Code.
Claude Cowork uses the Claude Code agent harness running inside a Linux VM (with additional sandboxing, network controls, and filesystem mounts). We run that through Apple's virtualization framework or Microsoft's Host Compute System. This buys us three things we like a lot:
(1) A computer for Claude to write software in, because so many user problems can be solved really well by first writing custom-tailored scripts against whatever task you throw at it. We'd like that computer to not be _your_ computer so that Claude is free to configure it in the moment.
(2) Hard guarantees at the boundary: Other sandboxing solutions exist, but for a few reasons none of them are as satisfying or let us make similarly sound guarantees about what Claude will and won't be able to do.
(3) As a product of 1+2, more safety for non-technical users. If you're reading this, you're probably equipped to evaluate whether or not a particular script or command is safe to run - but most humans aren't, and even those who are often experience "approval fatigue". Not having to ask for approval is valuable.
It's a real trade-off though and I'm thankful for any feedback, including this one. We're reading all the comments and have some ideas on how to maybe make this better - for people who don't want to use Cowork at all, who don't want it inside a VM, or who just want a little bit more control. Thank you!
FWIW I think many of us would actually very much love to have an official (or semi-official) Claude sandboxing container image base / VM base. I wonder if you all have considered making something like the Cowork VM available for that?
> It's a real trade-off though and I'm thankful for any feedback, including this one.
Feedback: If your app is going to use 10GB of storage, tell the user in advance and give them a one-click way to remove it. Just basic manners. Don't pick your nose at the dinner table. It's not hard, just common decency.
> even those who are often experience "approval fatigue". Not having to ask for approval is valuable.
This is by and large a short-term pro for Anthropic. It's often not one for the user, and in the long term, often barely even for the company. In any case, it's a great example of putting Anthropic's priorities above the users'. Which is fine and happens all the time, but in this case just isn't necessary. Similar to the AGENTS.md case. We're on the cusp of a pattern being established here, and that's something you'll want to stop before it's ossified.
I'd agree with this if their target market were only developers
but over 90% of their users are non-technical, so removing that approval step is the correct move in a product sense.
users install cowork for the magic, 10gb is negligible. these days even steam games are 50gb+ and you care more about the gameplay than the disk space.
I accidentally clicked the Claude Cowork button inside the Claude desktop app. I never used it. I didn't notice anything at the time, but a week later I discovered the huge VM file on my disk.
It would be really nice to ask the user: "Are you sure you want to use Cowork? It will download and install a huge VM on your disk."
Same. I work on an M3 Pro with a 512GB disk, and most of the time I have around 50GB free, which often drops to 1GB quite quickly (I work with video editing and photos, and caches are aggressive there). I use apps like Pretty Clean and some of my own scripts (for brew cleanup, deleting Flutter builds, etc.). So every 10GB used is a big deal for me.
I also discovered that VM image eating 10GB for no reason. I have Claude Desktop installed but almost never use it (I mostly use Claude Code).
I tried to use it right after launch from within Claude Desktop, on a Mac VM running within UTM, and got cryptic messages about the Apple virtualization framework.
That made me realize it wants to run an Apple virtualization VM of its own but can't, since it's already inside one - imo the error messaging here could be better, or, given that it's already in a VM, it could perhaps skip the nested VM altogether. Because right now I still haven't gotten to try Cowork because of this error.
Does UTM/Apple's framework not allow nested virtualization? If I remember correctly from x86(_64) times, this is a thing that sometimes needs to be manually enabled.
I would look at how podman for Mac manages this; it is more transparent about what's happening and why it needs a VM. It also lets you control more about how the VM is executed.
> (2) Hard guarantees at the boundary: Other sandboxing solutions exist, but for a few reasons none of them are as satisfying or let us make similarly sound guarantees about what Claude will and won't be able to do.
This is the most interesting requirement.
So all the sandbox solutions recently developed all over GitHub fell short of your expectations?
This is half surprising, since many people using AI to solve the sandboxing problem have claimed to have done so over several months, and the best we have is Apple containers.
What were the few reasons? Surely there has to be some strict requirement that everyone else is missing.
But still, having a 10 GB claude.vmbundle doesn't make any sense.
Claude Cowork grabs local DNS resolution on macOS which conflicts with secure web gateway aka ZTNA aka SASE products such as Cloudflare Warp which do similar. The work-around is to close Cowork, let Warp grab mDNSResponder's attention first, then restart Claude Desktop, or some similar special ordering sequence. It's annoying, but you could say that about everything having to do with MITM middleboxes.
Do you think it would be possible in the future to maybe add developer settings to enable or disable certain features, or to switch to other sandboxing methods that are more lightweight like Apple seatbelt for example?
They're using the harnesses provided by the respective underlying operating systems to do virtualization.
I'd like to explore that topic more too, but I feel like the context of "we deferred to MacOS/Windows" is highly relevant context here. I'd even argue that should be the default position and that "extensive justification" is required to NOT do that.
To a firm with such policies, allowing Cowork outside the VM would be strictly worse.
Ironically, VMs are typically blocked because the infosec team isn't sure how to look inside them and watch you, unlike containers where whatever's running is right there in the `ps` list.
They don't look inside the JVM or .exes either, but they don't think about those the same way. If they can treat a VM like an app or an exe - as bounded, with what's inside staying inside - they can get over their concerns. (If not, build them a VM with their sensors inside it as well, and move on.)
This conversation can take a while, and several packs of whiteboard markers.
Speaking as a tiny but regulated SMB that's dabbling in skill plugins with Cowork: we strongly appreciate and support this stance. We hope you don't relax your standards, and need you not to. We strongly agree with (1), (2), and (3).
If working outside the sandbox becomes available, Cowork becomes a more interesting exfil vector. It should also be possible to make the VM non-optional - even if MDM allows users to elevate privileges.
We've noticed you're making other interesting infosec tradeoffs too. Your M365 connector aggressively avoids enumeration, which we figured was intentional as a seatbelt for keeping looky-loos in their lane.* Caring about foot-guns goes a long way in giving a sense of you being responsible. Makes it feel less irresponsible to wade in.
In the 'thankful for feedback' spirit, here's a concrete UX gap: we agree approval fatigue matters, and we appreciate your team working to minimize prompts.
But the converse is that when a user rejects a prompt - or it ends up behind a window - there's no clear way to re-trigger it. The Claude app can silently fail or run forever when it can't spin up the workspace, wasn't allowed to install Python, or was told it can't read M365 data.
Employees who've paid attention to their cyber training (reasonably!) click "No" and then they're stuck without diagnostics or breadcrumbs.
For a CLI example of this done well, see `m365-cli`'s `auth` and `doctor` commands. The tool supports both interactive and script modes through config (backed by a setup wizard):
https://pnp.github.io/cli-microsoft365/cmd/cli/cli-doctor/
Similarly, first party MCPs may run but be invisible to Cowork. Show it its own logs and it says "OK, yes, that works but I still can't see it, maybe just copy and paste your context for now." A doctor tool could send the user to a help page or tell them how to reinstall.
Minimal diagnostics for managed machines — running without local admin but able to be elevated if needed — would go a long way for the SMBs that want to deploy this responsibly.
Maybe a resync-perms button, or a Settings or Help menu item that calls Cowork's own doctor CLI when invoked?
---
* When given IDs, the connector can read anything the user can anyway. We're able to do everything we need, just had to ship ID signposts in our skill plugin that taps your connector. Preferred that hack over a third party MCP or CLI, thanks to the responsibility you look to be iteratively improving.
It's incredible how many applications abuse disk access.
In a similar fashion, the Apple Podcasts app decided to download 120GB of podcasts for no apparent reason and never deleted them. It even showed up as "System Data" and made me look for external drive solutions.
I use my MacBook for a mix of dev work and music production and between docker, music libraries, update caches and the like it’s not weird for me to have to go for a fresh install once every year or two.
Once that gets filled up, it's pretty much impossible to understand where the giant block of storage went.
Yep, it is an awful situation. I'm increasingly becoming frustrated with how Apple keeps disrespecting users.
I downloaded several macOS installers - not for the MacBook I use, but intending to use them to create a partitioned USB installer (they were for macOS versions I clearly couldn't even use on my current MacBook). Then, after creating the USB, since I was short on space, I deleted the installers, including from the trash.
Weirdly, I did not reclaim any space, and I wondered why. After scratching my head for a while, I asked an LLM, which directed me to check the system snapshots. I had previously disabled Time Machine backups and snapshots, and yet I saw these huge system snapshots containing the files I had deleted - and the kicker was, there was no way to delete them!
Again I scratched my head for a while, looking for a solution other than wiping the MacBook and reinstalling macOS, and then I had the idea to just restart. Lo and behold, the snapshots were gone after restarting. I was relieved, but also pretty pissed off at Apple.
Because Apple differentiates their products by storage size, and they also sell iCloud subscriptions, there is zero (in fact negative) incentive to respect your storage space.
I had the same problem and had some luck cleaning things up by enabling "calculate all sizes" in Finder, which will show you the total directory size, and makes it a bit easier to look for where the big stuff is hiding. You'll also want to make sure to look through hidden directories like ~/Library; I found a bunch of Docker-related stuff in there which turned out to be where a lot of my disk space went.
You can enable "calculate all sizes" in Finder with Cmd+J. I think it only works in list view however.
The trick is to reboot into recovery partition, disable SIP, then run OmniDiskSweeper as root (as in `sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper`). Then you can find all kinds of caches that are otherwise hidden by SIP.
I should not have to hack through ~/Library files to reclaim space on a TB drive because macOS wanted to put 200GB of crap there in an opaque manner and not give the user ANY direct way to get their space back.
Your friend is called ncdu. The exclude for Volumes is necessary because otherwise ncdu ends up in an infinite loop - "/Volumes/Macintosh\ HD/Volumes/" can be repeated ad nauseam, and ncdu's -x flag doesn't catch that for whatever reason.
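The actual ncdu command didn't survive here; based on the description, it was presumably something like the following (a sketch - flags per ncdu's manual, exclude path assumed):

```shell
# Scan the root volume. -x stays on one filesystem, and --exclude
# skips /Volumes explicitly, since the firmlinked "Macintosh HD"
# mount there otherwise recurses forever.
sudo ncdu -x --exclude /Volumes /
```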
Don't run "du -h ~/Library/Messages" then, I've mentioned that many times before and it's crazy to me to think that Apple is just using up 100GB on my machine, just because I enable iMessage syncing and don't want to delete old conversations.
One would think that's an extremely common use case, and it will only grow the more years iMessage exists. Just offload them to the cloud - charge me for it if you want - but every other free messaging service that exists has no problem doing that.
sudo du -sh ~/Library/Messages
Password:
du: /Users/cvaske/Library/Messages: Operation not permitted
Wow, SIP is a bit more insidious than I remember. Maybe I should try it in Terminal.app rather than a third party app... I wonder if there will ever be a way to tell the OS "this action really was initiated by the user, not malware, let them try to do what they say they want to do"
Edit: investigating a bit more, apparently the lack of a sudo-equivalent, an "elevate this one process temporarily" command is intentional in the design, so that malicious scripts can't take advantage of that "this is really the user" approval path. I can't say I agree with that design decision, but at least it's an ethos.
Same with photos. You can enable the option to offload, but there's no way to control how much is used locally. I don't know why Messages does that either. There's also no easy way to remove the hundreds of thousands of photos in Messages across all chats.
This one drives me nuts. Not just on Mac, also on iPhone/iPad. It's 2026, and 5G is the killer feature advertised everywhere. There's no reason to default to downloading gigabytes of audio files if they could be streamed with no issue whatsoever.
I'm on 5G right now and it just struggled to load the HN front page due to local network congestion. At times of day when it's not congested it reaches 60-90Mbyte/s in the same physical location
Spotify just gave up while trying to show me my podcasts. I can't listen to anything not already downloaded right now.
Yet at 3am I'll be able to download a 100GB LLM without difficulty onto the same device that can't stream a podcast right now.
Unfortunately I don't think 5G is the streaming panacea you have in mind. Maybe one day...
I had the same problem but with a bad time machine backup. ~300GB of my 512GB disk, just labeled the generic "System Data". I lost a day of work over it because I couldn't do Xcode builds and had to do a deep dive into what was going on.
That's one way to drive sales for higher priced SSDs in Apple products. I'm pretty sure that that sort of move shows up as a real blip on Apple's books.
Not sure what you have against it. Works great for me. No subscription required. And if I do want to pay for ad free shows and support creators it's easy to do so.
Use whatever you like but I don't think Podcast app users are rare by any stretch of the imagination.
AFAIK the native Podcasts app for iPhone is the only way to make PC-phone podcast file syncing work. This stops you from downloading the same podcast file twice, once on your PC and once on your phone.
It's generally a good app. People in the tech community like Overcast, but I've always found its UI completely illogical. Apple Podcasts is organized like I'd expect a podcast app to be.
The market for Cowork is normals, getting to tap into an executive assistant who can code. Pros are running their consumer "claws" on a separate Mac Mini. Normals aren't going to do that, and offices aren't going to provision two machines for everyone.
The VM is an obvious answer for this early stage of scaled-up research into collaborative computing.
Yes! Whether VPS or local VM, this is a thing for good reasons.
Some reasons aren't even optional. Small but regulated entities exist, and most "Team" sized businesses aren't in Google apps or "the cloud" as they think about it, but are in M365, and do pay for cyber insurance.
Cowork with skills plugins that leverage Python or bash is a remarkably enabling framework given how straightforward it is. A skill engineer can sit with an individual contributor domain expert, conversationally decompose the expert's toil into skills and subcommands, iterate a few times, and like magic the IC gets hours back a day.
Cowork is Agents-On-Rails™ for LLM apps, like Rails was to PHP for web apps.
The VM makes that anti-fragile.
For any SaaS builders reading this: by far the most white-collar small-business work is in Microsoft Office. The scarce "Continue with Microsoft" OIDC button reaches more potential SMB desks than the ubiquitous "Continue with Google", and you don't have to learn the legacy SAML dance.
Anthropic seems to understand this. It's refreshing when a firm discovers how to cater to the 25–150 seat market. There's an uncanny valley between early adopters and enterprise contracts, but the world runs on SMBs.
I concur. I don't want to install libraries on my host machine that I won't use for anything other than development, e.g., Node.js.
On macOS, Lima has been a godsend. I have Claude Code in an image, and I just mount the directory I want the VM to have access to. It works flawlessly and has been a replacement for Vagrant for me for some time. Though, I owe a lot to Vagrant. It was a lifesaver for me back in the day.
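As a sketch of what that Lima setup looks like (paths are examples, not the commenter's actual config), the per-instance lima.yaml controls which host directories the VM can see:

```yaml
# Excerpt of a lima.yaml: mount only one project directory, writable.
# Anything on the host that isn't listed here stays invisible to the guest.
mounts:
  - location: "~/projects/myapp"
    writable: true
```

Directories not mounted simply don't exist from the guest's perspective, which is what makes this a reasonable boundary for an agent.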
I prefer devcontainers for more involved project setups as they keep it lighter than introducing a VM. It’s also pretty easy to work with Docker (on your host) with the docker-outside-of-docker feature.
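For reference, a minimal devcontainer.json with that feature might look like this (the base image is just an example; treat it as a sketch):

```json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```

The docker-outside-of-docker feature forwards the host's Docker socket into the container, so `docker` commands inside drive the host daemon - lighter than nested Docker, but with correspondingly weaker isolation.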
However, I’m also curious about using NixOS for dev environments. I think there’s untapped potential there.
we love nix for dev environments, and highly recommend it. many other problems go away. don't see that as what's being solved here, though.
containers contain stuff the way an open bookcase contains books: they're just namespaces and cgroups on a filesystem overlay, more or less, held together by willpower, not boundaries.
as a firm required to care about infosec, we appreciate the stance in their (2). and MacOS VMs are so fast now, they might as well be containers except, you know, they work. (if not fast, that should be fixed.)
that said, yes, running local minikube and the like remain incredibly useful for mocking container envs where the whole environment is inside a machine(s) boundary. containers are _almost_ as awesome as bookcases…
I guess it could warn about it, but the VM sandbox is the best part of Cowork. The sandbox is necessary to balance the power you get from generated (hidden-from-user) code with the security non-technical users need. I'd go even further and make the user grant host filesystem access only to specific folders, and warn about anything with write access: I can think of lots of easy-to-use UIs for this.
I believe that employees at Anthropic use CC to develop CC now.
AI really does give many users the ability to develop a complete product, but the quality is decreasing. Professional developers will be in demand when the products/features become popular.
The first batch of users of a new product has to take on more responsibility, testing the product like lab rats.
I can't see how these 1st-party products can compete against open source. Why would anyone choose a shit proprietary solution when the free one is better?
> AI really does give many users the ability to develop a complete product, but the quality is decreasing. Professional developers will be in demand when the products/features become popular.
Looking at the number of issues, outages, and rookie mistakes the employees are making leads me to believe that most of them are below junior level.
If anyone were to re-interview everyone at Anthropic for their own roles with their own interview questions, I would guess that >75% of them would not pass their own interviews.
The only teams that would pass them are the Bun team and some of the other recently acquired startups.
While the whole "Claude Code is just like a game engine" tweet was silly, this comment seems too derisive. I highly doubt engineers at Anthropic are lacking in talent.
I literally spent the last 30 mins with DaisyDisk cleaning up stuff in my laptop, I feel HN is reading my mind :)
I also noticed this 10GB VM from CoWork. And was also surprised at just how much space various things seem to use for no particular reason. There doesn't seem to be any sort of cleanup process in most apps that actually slims down their storage, judging by all the cruft.
Even Xcode. The command line tools install and keep around SDKs for a bunch of different OSes, even though I haven't launched Xcode in months. And it keeps a copy of the iOS simulator even though I haven't launched one in over a year.
Yup, it uses the Apple Virtualization framework for virtualization. That means I can't use Claude Cowork within my VMs, which is how I found out it was running a VM: it caused a nested-VM error. All it does is limit functionality, consume extra disk space, and cause lag. A better sandbox environment would be Apple's Seatbelt, which is what OpenAI uses, but even that isn't perfect: https://news.ycombinator.com/item?id=44283454
I don't have an opinion on how they should handle the nested-VM problem, but I very much disagree that Seatbelt is better. Claude Code (aka `claude`) uses it, and it's barely good for anything.
Out of curiosity, why are you running Cowork inside a VM in the first place? What does that get you that letting Cowork use its own VM wouldn’t?
OpenAI Codex CLI was able to use it effectively, so at least the AI knows how to use it. Still, it's deprecated and not maintained; Apple needs to make something new soon.
https://code.claude.com/docs/en/devcontainer
It does work but I found pretty quickly that I wanted to base my robot sandbox on an image tailored for the project and not the other way around.
> All-in-One Sandbox for AI Agents that combines Browser, Shell, File, MCP and VSCode Server in a single Docker container.
hn should really touch more grass.
Claude mangles XML files that use <name> as a tag, rewriting it to <n>
https://news.ycombinator.com/item?id=47113548
Also, please allow Cowork to work on directories outside the homedir!
Sorry for the ask here, but I'm unaware of other avenues of support, as the tickets on the Claude Code repo keep getting closed because it is not a CC issue.
https://github.com/anthropics/claude-code/issues/26457
Spotify just gave up while trying to show me my podcasts. I can't listen to anything not already downloaded right now.
Yet at 3am I'll be able to download a 100GB LLM without difficulty onto the same device that can't stream a podcast right now.
Unfortunately I don't think 5G is the streaming panacea you have in mind. Maybe one day...
I also prompt Warp/Gemini CLI to identify unnecessary caches and similar data and delete them.
That's one way to drive sales for higher priced SSDs in Apple products. I'm pretty sure that that sort of move shows up as a real blip on Apple's books.
Use whatever you like but I don't think Podcast app users are rare by any stretch of the imagination.
https://developer.hashicorp.com/vagrant is still a thing.
The market for Cowork is normals getting to tap into an executive assistant who can code. Pros are running their consumer "claws" on a separate Mac Mini. Normals aren't going to do that, and offices aren't going to provision two machines for everyone.
The VM is an obvious answer for this early stage of scaled-up research into collaborative computing.
https://exe.dev
https://sprites.dev
https://shellbox.dev
Some of these reasons aren't even optional. Small but regulated entities exist, and most "Team"-sized businesses aren't in Google apps or "the cloud" as they think of it; they're in M365, and they do pay for cyber insurance.
Cowork with skills plugins that leverage Python or bash is a remarkably enabling framework given how straightforward it is. A skill engineer can sit with an individual contributor domain expert, conversationally decompose the expert's toil into skills and subcommands, iterate a few times, and like magic the IC gets hours back a day.
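A sketch of what such a skill can look like under the hood (the name and the CSV chore are made up; the point is one entry point plus subcommands a non-programmer can invoke by description):

```shell
# hypothetical "expense-report" skill: one function, one subcommand per chore
expense_report() {
  case "$1" in
    summarize) awk -F, '{t += $2} END {print "total:", t}' "$2" ;;
    count)     wc -l < "$2" ;;
    *)         echo "usage: expense_report {summarize|count} file.csv" ;;
  esac
}

# example run against a tiny CSV
printf 'lunch,12\ncab,30\n' > /tmp/expenses.csv
expense_report summarize /tmp/expenses.csv   # prints "total: 42"
```

Iterating on something this small with the domain expert in the room is what makes the "hours back a day" claim plausible.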
Cowork is Agents-On-Rails™ for LLM apps, like Rails was to PHP for web apps.
The VM makes that anti-fragile.
For any SaaS builders reading this: by far, most white-collar small-business work is in Microsoft Office. The scarce "Continue with Microsoft" OIDC reaches more potential SMB desks than the ubiquitous "Continue with Google", and you don't have to learn the legacy SAML dance.
Anthropic seems to understand this. It's refreshing when a firm discovers how to cater to the 25–150 seat market. There's an uncanny valley between early adopters and enterprise contracts, but the world runs on SMBs.
Sign them all up!
On macOS, Lima has been a godsend. I have Claude Code in an image, and I just mount the directory I want the VM to have access to. It works flawlessly and has been a replacement for Vagrant for me for some time. Though, I owe a lot to Vagrant. It was a lifesaver for me back in the day.
However, I’m also curious about using NixOS for dev environments. I think there’s untapped potential there.
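For anyone curious, the Lima workflow described above is roughly this (instance name assumed; mounts are declared in the instance's lima.yaml as a `mounts:` list of location/writable entries, and the commands are guarded so this no-ops without Lima installed):

```shell
# boot a Lima VM and run a command inside it; the host directory you want
# visible is listed under "mounts:" in the instance's lima.yaml
if command -v limactl >/dev/null 2>&1; then
  limactl start default            # create/boot the instance
  limactl shell default claude     # run Claude Code inside the VM
else
  echo "limactl not installed"
fi
```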
containers contain stuff the way an open bookcase contains books: they're just namespaces and cgroups on a filesystem overlay, more or less, held together by willpower, not boundaries:
https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
https://github.com/p8952/bocker
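You can see the "just namespaces" part on any Linux box: every process already carries a set of namespace handles, and a containerized process is simply one whose handles point somewhere else:

```shell
# every Linux process lives in namespaces; these symlinks are the handles.
# a "containerized" process just shows different inode numbers here.
ls -l /proc/self/ns
```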
as a firm required to care about infosec, we appreciate the stance in their (2). and macOS VMs are so fast now, they might as well be containers, except, you know, they work. (if not fast, that should be fixed.)
that said, yes, running local minikube and the like remain incredibly useful for mocking container envs where the whole environment is inside a machine(s) boundary. containers are _almost_ as awesome as bookcases…
AI really does give many users the ability to develop a complete product, but the quality is declining. Professional developers will be in demand once those products/features become popular.
The first batch of users of a new product has to take on more responsibility, testing it like lab rats.
Looking at the amount of issues, outages and rookie mistakes the employees are making leads me to believe that most of them are below junior level.
If anyone were to re-interview everyone at Anthropic for their own roles with their own interview questions, I would guess that >75% of them would not pass their own interviews.
The only teams that would pass are the Bun team and some of the other recently acquired startups.
I also noticed this 10 GB VM from Cowork, and was also surprised at just how much space various things seem to use for no particular reason. Judging by all the cruft, there doesn't seem to be any cleanup process in most apps that actually slims down their storage.
Even Xcode. The command-line tools install and keep around SDKs for a bunch of different OSes, even though I haven't launched Xcode in months. It also keeps a copy of the iOS Simulator even though I haven't launched one in over a year.
Not a new problem, unfortunately. DevCleaner is commonly used to keep it under control: https://github.com/vashpan/xcode-dev-cleaner
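If you'd rather not install anything, the usual suspects can be inspected and pruned by hand. These are the standard Xcode cache locations; DerivedData in particular is safe to delete and gets rebuilt on the next build:

```shell
# show what Xcode is hoarding (macOS paths; harmless no-op elsewhere)
du -sh "$HOME/Library/Developer/Xcode/DerivedData" \
       "$HOME/Library/Developer/Xcode/iOS DeviceSupport" \
       "$HOME/Library/Developer/CoreSimulator/Caches" 2>/dev/null || true
# reclaim the big one; Xcode recreates it as needed:
# rm -rf "$HOME/Library/Developer/Xcode/DerivedData"
```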
Out of curiosity, why are you running Cowork inside a VM in the first place? What does that get you that letting Cowork use its own VM wouldn’t?