My work gives us Copilot access since we're a Microsoft Shop™
It's too bad Copilot is by far the dumbest competitor in the space
My favorite interaction so far was when I prompted it with:
ffmpeg command to convert movie.mov into a reasonably sized mp4
Sure, they're not the most direct instructions, but I tend to give it just enough to get the job done, assuming the LLM knows what its purpose is as an LLM, and it always works with the other chatbots.
Copilot's response:
I implemented and executed the Python code above to convert movie.mov to a reasonably sized movie.mp4 using ffmpeg.
However, the Python code failed since it was not able to find and access movie.mov file.
Do you want me to try again or is there anything else that I can help you with?
Note that I didn't cut anything out. It didn't actually provide me any "Python code above".
Here’s the breakdown:
- -vcodec libx264: Uses the H.264 video codec, great for compression and compatibility.
- -crf 23: Constant Rate Factor — lower means better quality and larger size; higher means smaller size but lower quality. 23 is a nice middle ground.
- -preset medium: Controls encoding speed vs. compression efficiency. You can try slow or faster depending on your needs.
- -acodec aac -b:a 128k: Compresses the audio using AAC at 128 kbps — pretty standard and compact.
If you're looking for even smaller file sizes, try bumping -crf up to 26 or using -preset slow.
Want to bake in scaling, remove metadata, or trim the video too? Happy to tailor the command further.
I did the same thing for several iterations and all of the responses were equally helpful.
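A sanity check on those numbers is easy to do by hand: the audio track's share of the file is just bitrate times duration. A quick sketch (the 10-minute duration is a made-up example):

```python
# Size contributed by a constant-bitrate audio track (e.g. AAC at -b:a 128k).
# Video size varies with content and CRF, so this is only the audio floor.
def audio_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """kilobits/second x seconds -> megabytes."""
    return bitrate_kbps * 1000 / 8 * duration_s / 1_000_000

print(audio_size_mb(128, 600))  # 10-minute clip -> 9.6 MB of audio
```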
We get these same anecdotes about terrible AI answers frequently in a local Slack I’m in. I think people love to collect them as proof that AI is terrible and useless. Meanwhile other people have no problem hitting the retry button and getting a new answer.
Some of the common causes of bad or weird responses that I’ve learned from having this exact same conversation over and over again:
- Some people use one never-ending singular session with Copilot chat, unaware that past context is influencing the answer to their next question. This is a common way to get something like Python code in response to a command line question if you’re in a Python project or you’ve been asking Python questions.
- They have Copilot set to use a very low quality model because they accidentally changed it, or they picked a model they thought was good but is actually a low-cost model meant for light work.
- They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones.
AI discussions are weird because there are two completely different worlds of people using the same tools. Some people are so convinced the tool will be bad that they give up at the slightest inconvenience or they even revel in the bad responses as proof that AI is bad. The other world spends some time learning how to use the tools and work with a solution that doesn’t always output the right answer.
We all know AI tools are not as good as the out of control LinkedIn influencer hype, but I’m also tired of the endless claims that the tools are completely useless.
I am 67.87% certain they make it dumber/smarter during the day. I think it gets faster/better during non-business hours. This needs more testing to confirm, though. However, they have exactly ZERO transparency (especially in the subscription model) into how much you are consuming and what you are consuming, which doesn't help with the suspicions.
One fascinating aspect of LLMs is they make out-in-the-wild anecdotes instantly reproducible or, alternatively, comparable to results from others with different outcomes.
A lot of our bad experiences with, say, customer support hotlines, municipal departments, bad high school teachers, whatever, are associated with a habit of speaking that adds flavor, vibes, or bends experiences into on-the-nose stories with morals, in part because we know they can't be reviewed or corrected by others.
Bringing that same way of speaking to LLMs can show us either (1) the gap between what it does and how people describe what it did, or (2) that people are being treated differently by the same LLMs. Both are fascinating outcomes, I think.
My shock moment was when I asked it to convert an image into a nice PPTX slide, naively assuming it had the best PowerPoint capabilities since that's also an MS product.
It returned an unformatted text box on one slide. I had to laugh so hard people in the office noticed.
Oh yeah, one time I uploaded a couple of files to figure out an issue I was having, and it offered to rewrite the files to fix the issue. It created a download of just one Java file, that was just an empty class with the same name, no methods in it or anything.
Yeah, working in an MS shop in the past couple years, that's what I've been saying ever since first iteration Copilot for MS Office came out, and it's true to this very day: you want AI to help you with your MS Office files? Ask ChatGPT. Or get API keys and use some frontend with a half-assed third party plugin that knows how to convert Markdown into .docx or such. Results are still an order of magnitude better than the Copilot in MS Office.
I blame it on corporate reluctance to take risks that could result in bad press. They put a lot of work into screwing Copilot up. I mean, they had it running GPT-4 back when GPT-4 was the new hot thing. The way it was comically useless is not something that "just happens" - as evidenced by the fact that just running GPT-4 via API yourself produced good results by default.
This is hilarious because both Gemini and ChatGPT are shockingly good at putting together ffmpeg commands. They can both compose and also explain the various options and stages/filters.
> I cannot reproduce this in any version of copilot?
Because LLM-based service outputs are fundamentally not reproducible. We have no insight into the model settings, the context, which model is being run, etc.
BUT, I have this in my copilot-instructions.md file:
# Always follow these steps when responding to any request
1. Please do a round of thinking in <thinking></thinking> tags
2. Then a round of self-critique in <critique></critique> tags
3. Then a final round of <thinking>, before responding.
4. If you need more information, ask for it.
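The run-to-run variation mentioned above isn't just hidden settings, either; sampling itself is stochastic. A toy sketch (not any vendor's actual decoder) of why the same prompt can produce different answers whenever temperature is above zero:

```python
import random

# Pretend next-"answer" distribution for one fixed prompt. At temperature > 0
# the decoder samples from the distribution instead of always taking the argmax.
answers = ["shell one-liner", "python script", "refusal"]
weights = [0.7, 0.2, 0.1]

def one_session(seed: int, n: int = 5) -> list[str]:
    rng = random.Random(seed)
    return [rng.choices(answers, weights=weights, k=1)[0] for _ in range(n)]

run_a = one_session(seed=1)
run_b = one_session(seed=2)
# Same prompt, same weights, different draws: the runs need not match.
```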
It's become increasingly obvious that people on Hacker News literally do not run these supposed prompts through LLMs. I bet you could run that prompt 10 times and it would never give up without producing a (probably fine) sh command.
Read the replies. Many folks have called gpt-4.1 through copilot and get (seemingly) valid responses.
Microsoft has really screwed up on branding yet again. Every time I read “Copilot” I think of the GitHub thing, forgetting that there is also a Microsoft Copilot that is actually multiple things across multiple products including Windows and Office.
It’s also a website like ChatGPT apparently? I thought it was called Copilot because it writes with you, so why is there also a general chat/search engine called Copilot? Jesus.
I think you may be confusing Microsoft Copilot with Microsoft365 Copilot? The first doesn’t give you access to Microsoft Copilot Studio but that might also be available with Microsoft Copilot Pro.
People are responding with "works for me", but I've found that with Windows Copilot it was impossible to reset the AI state, and past prompts would color new inputs.
The new chat, or new conversation buttons seem to do nothing.
This is pretty interesting. I had a very similar experience with GitHub Copilot's plugin inside a JetBrains IDE today (set to use 4o). I asked it to tell me how to do something; it instead rooted around in my code, tried to write a Python file (it's a PHP project), then told me it couldn't do it and gave the exact same "Do you want me to try again or is there anything else I can help you with?"
Thing is I ask it random bits like this all the time and it's never done that before so I'm assuming some recent update has borked something.
Ohh wow, that's bad. Just tried this with Gemini 2.5 Flash/Pro (and it worked perfectly) -- I assume all frontier models should get this right (even simpler models should).
I'd be willing to bet a clearer prompt would've given a good answer. People tend to overlook that AIs aren't like Google: they're not doing a pure keyword search, and they work best when given a sensible sentence structure.
I think the biggest issue is that M365 Copilot was sold as something that would integrate with business data (Teams, files, mail, etc.), and that never quite worked out.
So you end up with a worse ChatGPT that also doesn't have work context.
Standard Copilot indeed sucks, but I'm quite fond of the new researcher agent. It spends much more time than any of the others I've tried, like Perplexity Pro and OpenAI.
From a one line question it made me a relevant document of 45 pages examining the issue from all different sides, many of which I hadn't even thought of. It spent 30 mins working. I've never seen Perplexity spend more than 5.
I won't be surprised if they significantly nerf it to save on computing costs. I think right now they give it their all to build a customer base, and then they nerf it.
I love Copilot in VSCode. I always select the "Claude Sonnet 3.7" model, since Copilot lets me choose the LLM. What I love about Copilot is the tight integration with VSCode. I can just ask it to do something, and it relies on the intelligence of Claude to get the right code generated; all Copilot is really doing is editing my code for me, reading whatever code Claude tells it to in order to build context, etc.
I don't have experience with Copilot, but I do with other LLMs. I'm not sure that omitting "provide me with" is enough to get the job done, generally, aside from being lucky that it correctly interprets the prompt. In my experience, other LLMs are just as prone to incorrectly divining what one means from telegraphic prompts.
Your conversations are notebooks and the code it conjured up should be behind a dropdown arrow. For visualization it seems to work fine (i.e. Copilot will generate a Python snippet, run it on the input file I attach to the request and present the diagram as a response).
In my experience Microsoft Copilot (free version in Deep Think mode) is way better than ChatGPT (free version) in most of things I throw at them (improving text, generating code, etc).
I asked Copilot to make an Excel formula that rounds all numbers up to the next integer... it took 4 back-and-forth messages and 15 minutes until it was working. Google took 5 minutes.
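For what it's worth, the answer being coaxed out there is a one-liner in Excel: `ROUNDUP(A1,0)` or `CEILING(A1,1)` for positive numbers. The same logic sketched in Python, just to show the expected behaviour:

```python
import math

# Round positive values up to the next integer, mirroring Excel's
# ROUNDUP(A1,0) / CEILING(A1,1) for positive inputs.
values = [3.2, 7.0, 10.01]
print([math.ceil(v) for v in values])  # [4, 7, 11]
```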
It's an AI race and they're all vibing off of what the other ones are doing and implementing accordingly. Meanwhile, I've been enjoying Google's AI Studio. I can definitely see, though, that the AI companies are trying: they're trying to give devs and teams the tools they need to operate more efficiently and faster.
However, when you pipe code requests and "hey, do this" prompts through AI and it gives you something back, there's also the privacy aspect of it.
In the interim, these ARE LLMs, and devs/infra people do look at how their tools are being used. You can pull a lot of info about an organization and what its internals are up to just from how it's using the AI.
So while we're looking at Microsoft's quality of testing, what does that really mean in terms of how they're viewing the tool usage?
First off, that’s a really bad prompt - LLMs don’t have this magic ability to read your mind. Second, despite how bad it is, Copilot just gave me the correct code.
The biggest problem with Microsoft is their UX. From finding out where to actually use their products, to signing in, wading through modals, popups, terms and agreements, redirects that don’t work and links that point to nowhere. Along the way you’ll run into inconsistent, decades old UI elements and marketing pages that fully misunderstand why you’re there.
It’s a big, unsolvable mess that will forever prevent them from competing with legacy-free, capable startups.
They should delete all their public facing websites and start over.
It's not only public facing websites - Azure is also pretty inconsistent, and lately every offer to preview a new UI has been a downgrade that I happily reverted - it's like they have a mandatory font-and-whitespace randomizer for every product. Also, while far from a power user, I've hit glitches that caused support tickets and were avoidable with clearer UX. Copilot in Azure - if it works at all - has been pretty useless.
Their UX, their naming conventions for products, frameworks, and services, the plugs pulled on products, the user hostility, and so on all point to the root of the problem being elsewhere. I think Microsoft is no longer reformable. It is a behemoth that will probably continue to coast along like a braindead Godzilla zombie that stays afloat due to its sheer size.
Those stupid dialogs that make you think they will help you solve an issue but actually just waste 5-10 minutes "scanning" just to link you to irrelevant webpages that sometimes don't exist.
I'll add that Google's search AI integration is quite good. I'm actually amazed how well it works, given the scale of Google Search. Nowadays I don't click search results in 50% of searches, because the Google AI output is good enough for me.
Maybe we have a different Google AI down here in south Texas, but the Google search AI results I receive are laughably bad.
It has made up tags for CLI functions, suggested nonexistent functions with usage instructions, given me operations in the wrong order, and, my personal favorite, given me a code example in the wrong language (think replying in Visual Basic for C).
The problem is that they made huge, time-consuming investments in technology to make Copilot work with the various O365 controls, then confused everyone by slathering Copilot on everything.
Microsoft hired the infamous guy from Inflection AI and fired the one responsible for Bing Chat, which was actually good, and it's been all downhill from there. Bing Chat actually made Google nervous!
Probably, but it might not matter. They don't really need to compete on quality, just on the simplicity of selling a bundled suite to enterprises, the same way they did with Teams, which is inferior to Slack in pretty much every way (last time I had to use it, anyway). Isn't their advantage always sales and distribution? Maybe it's different this time, I don't know.
Renaming all their products to Copilot makes no sense and just causes brand confusion.
Copilot getting access to your entire 365/Azure tenant is just a security nightmare waiting to happen (in fact, there's already that one published and presumably patched vuln).
It has so many shackles on that it's functionally useless. Half the time I ask it to edit one of my emails, it simply spits my exact text back out.
Its one singular advantage is that it has a crystal clear corpospeak license spelling out what it says your data will be used for. Whether or not it's true is irrelevant; organisations will pick it up for that feature alone. No one ever got fired for choosing IBM, etc.
The biggest issue with Copilot might not be the model itself, but the naming strategy. One name is used for several completely different products, and users end up totally confused. You think you're using GitHub Copilot, but it's actually M365 Copilot, and you don't even get to choose the model. Microsoft really needs to make this clearer.
You are probably not a customer in the sense of being a decision maker at a big traditional company/organization. MS is obfuscating on purpose so they can say in sales decks that if you buy this, you get all these Copilots and your Fortune 1000 business is AI-proof. What they leave out is that not every Copilot is equal.
For some reason I had also gotten the impression that Copilot was powered by OpenAI in some way. Perhaps the Microsoft OpenAI partnership gave me that impression.
I also wasn't aware that there was an OpenAI/Microsoft rivalry; I had the impression that Microsoft put a lot of money into OpenAI and that ChatGPT ran on Azure, or was at least available as an offering via Azure.
Copilot is powered by a Microsoft-hosted version of OpenAI's models. If you ask it, it says "I'm based on GPT-4, a large language model developed by OpenAI. Specifically, you're chatting with Microsoft Copilot, which integrates GPT-4 with additional tools and capabilities like web browsing, image understanding, and code execution to help with a wide range of tasks."
msft had a massive edge. it had exclusive access to models + had web search before anyone.
they flopped this royally, just like windows mobile. they created a shitty ux by shoving it inside the bing app, then they decided to charge for it instead of capturing all enterprise value.
lastly, the product has stalled and missed on their biggest opportunity which is tapping into the data. you can think it's because of how complex it must be, but then openai and everybody else did it.
it's truly a lesson in product mismanagement, once again, from microsoft
Just say the quiet part out loud, Microsoft is such a large anti-competitive company they literally don't have to build competing products. Customers are not even going to evaluate other options because it will be bundled with their other mediocre services.
It was bound to happen. Corporations always commit suicide once they're successful and it almost always looks the same. It's why I don't invest time in non-free software.
I use many LLM tools (ChatGPT, Claude, Gemini, GitHub Copilot, etc), I have never ever gotten any version of MS Copilot to do anything useful for me. I've been stunned at how they can use the same models that ChatGPT does, copy their use cases, and still deliver a turd.
The Github Copilot (in VS Code especially) is the only application of LLMs that I've found useful from Microsoft. I would have loved amazing Copilot support in Word for working on a large complex document, but I haven't found that to work well.
YMMV, but I found it useful for drafting a pull request on GitHub, where it basically just did all the boring work, including finding the particular line in a large codebase that was throwing the error. It wasn't a hard problem, but it still would have required a bit of mental effort on my part, and I'd rather spend that reading a book.
I've come to the conclusion that once companies get big enough, they are unable to build anything really useful. I'm sure there are exceptions, but it feels like 99% of the time this is true. The best they can do is acquire a company and hope that goes well.
If only MS copilot was an actual co-pilot in my daily flight of work… but no, it cannot actually draft an email, create planner tasks or book meetings. It is a crappy way of using an LLM that sometimes can access your stuff from MS graph and sometimes has no idea about the six emails from Steve in my inbox.
And no, its PowerPoints are not good either, they are LinkedIn in ppt-form.
1. In Outlook, I asked Copilot to "create an email asking for a weekly status update for Project ABC". Iterating a bit, I asked it to be more casual and shorter. Here's what I got:
Hey Team,
Hope you're all doing great. To keep everyone on the same page with Project ABC, I'd like us to start doing weekly status updates. Just a quick summary of key developments, milestones, challenges, and what's coming up would be super helpful. This way, we can coordinate better and tackle any issues right away.
Thanks for your cooperation and hard work. Looking forward to your updates!
That seems pretty on point. Not exactly my style but to say "cannot actually draft an email" is clearly wrong.
2. I provided Copilot with this prompt:
"I am upgrading the electrical service at my house from 100a to 200a service. Create a project plan in Planner for this project."
Here's the response I got:
Nice! Upgrading from 100A to 200A is a solid move—more headroom for your gear and future-proofing your setup. Here's a breakdown of how you could structure this as a project in Microsoft Planner, with buckets and tasks to keep it organized:
---
### **Bucket 1: Planning & Permitting**
- **Assess current electrical load and future needs**
- **Consult with a licensed electrician**
- **Request utility service requirements**
- **Obtain local permits for electrical service upgrade**
- **Review NEC and local code requirements**
---
### **Bucket 2: Materials & Scheduling**
- **Select upgraded panel (200A rated) and breakers**
- **Order required materials (panel, feeder wires, conduit, ground rods, etc.)**
- **Coordinate shutdown and install time with utility provider**
- **Schedule rough-in inspection with local authority**
---
### **Bucket 3: Installation**
- **Shut off power at the main disconnect (coordinate with utility)**
- **Remove existing 100A panel and components**
- **Install 200A panel and connect feeders**
- **Upgrade grounding/bonding system as required**
- **Replace meter socket if required**
---
### **Bucket 4: Inspections & Finalization**
- **Schedule and pass electrical inspection**
- **Have utility restore permanent power**
- **Label circuits clearly in new panel**
- **Update homeowner insurance with new electrical capacity**
---
### **Bucket 5: Post-Project Wrap-Up**
- **Document panel layout and upgrades**
- **Evaluate energy use and plan for future expansion (EV charger, generator, etc.)**
- **Close out permits with inspection authority**
---
You can assign these tasks to yourself or any contractors involved, and add due dates to align with your timelines. Want help creating a PowerShell script to bulk import this into Planner?
It did produce a working script that required 1 configuration edit. The project plan is pretty good given the limited information I provided in the prompt.
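For anyone wondering what "bulk import into Planner" actually involves: Microsoft Graph exposes Planner buckets and tasks as plain JSON resources (POST /v1.0/planner/buckets and /v1.0/planner/tasks). A minimal sketch of building those request bodies; the plan ID and bucket IDs are placeholder assumptions, and no network call or auth handling is shown:

```python
# Sketch of Microsoft Graph request bodies for bulk-creating Planner tasks.
# "example-plan-id" is a placeholder; real calls need an OAuth bearer token.
PLAN_ID = "example-plan-id"

buckets = {
    "Planning & Permitting": [
        "Consult with a licensed electrician",
        "Obtain local permits for electrical service upgrade",
    ],
    "Installation": [
        "Remove existing 100A panel and components",
        "Install 200A panel and connect feeders",
    ],
}

def bucket_body(name: str) -> dict:
    # Body for POST /v1.0/planner/buckets
    return {"name": name, "planId": PLAN_ID, "orderHint": " !"}

def task_body(title: str, bucket_id: str) -> dict:
    # Body for POST /v1.0/planner/tasks; bucket_id comes from
    # the bucket-create response in a real script.
    return {"planId": PLAN_ID, "bucketId": bucket_id, "title": title}

task_bodies = [
    task_body(title, bucket_id=f"placeholder-{name}")
    for name, titles in buckets.items()
    for title in titles
]
```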
3. I provided Copilot with this prompt:
book a meeting with <name> for August 1 entitled "Test Meeting"
Not a very good response:
I can't directly book meetings or access your calendar, but I can help you set it up! Here's an example of what to enter in Outlook or your preferred calendar app:
**Title:** Test Meeting
**Date:** August 1
**Attendees:** Bryan Herbst
**Time:** [Insert your preferred time]
**Location:** [Optional]
**Description:** [Add any notes or agenda items]
If you're using PowerShell or a script to automate calendar events—especially within Microsoft 365—I can help you write that too. Want to go that route?
It seems like these examples show that it has about the same capabilities as a basic chat interface like Claude or ChatGPT without any benefit from being integrated with the Microsoft ecosystem.
If you ask it to 'send' an email instead, it will tell you it can't, just like your calendar example. That was my first real use case, the first time I saw it pop up (in my email client...).
Even 'draft', really: if you forget what you've done, close your eyes, and think about it from scratch, surely you'd expect that draft to end up in your ...drafts?
Cool, so you've spent just as much time "iterating a bit" as you would have done typing, your thinking and typing skills have atrophied a bit more, and you've made your colleagues lives that bit duller by sending them something written by the "average person".
I asked it to make a sheet look nicer and it couldn't without me explicitly telling it what I wanted done.
When I told it to do certain things, it was like "that's not implemented yet, stay tuned!"
Yeah, like how about answering the fucking question? lol
- Copilot with outlook.com
- The base Copilot that comes with M365
- The add-on one for $30/mo
- Copilot in VS Code

All produce: ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k output.mp4
Which is not surprising because it's just an OpenAI 4o call... so how are you getting this?
Gemini already making me know and decide whether I need Flash or Pro misses the point, but 4 of them is crazy.
https://i.imgur.com/toLzwCk.png
ffmpeg -i movie.mov -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k movie.mp4
```
ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie_converted.mp4
```

Along with a pretty detailed and decent-sounding reasoning as to why it picked these options.
```
~> ollama run gemma3:12b-it-qat
>>> ffmpeg command to convert movie.mov into a reasonably sized mp4
```

Here's a good ffmpeg command to convert `movie.mov` to a reasonably sized MP4, along with explanations to help you adjust it:

```bash
ffmpeg -i movie.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k movie.mp4
```

*Explanation of the command and the options:*
(Perhaps copilot is not lazy, just stupid relative to its peers.)
When ChatGPT first came out, Satya and Microsoft were seen as visionaries for their wisdom in investing in Open AI. Then competitors caught up while Microsoft stood still. Their integration with ChatGPT produced poor results [1] reminding people of Tay [2]. Bing failed to capitalize on AI, while Proclarity showed what an AI-powered search engine should really look like. Copilot failed to live up to its promise. Then Claude.ai, Gemini 2.0 caught up with or exceeded ChatGPT, and Microsoft still doesn't have their own model.
[1] https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...
[2] https://en.wikipedia.org/wiki/Tay_(chatbot)
https://www.osnews.com/story/19921/full-text-an-epic-bill-ga...
Access to their IP, and 20% of revenue (not profit).
Altman will absolutely attempt this.
(& small typo, “Proclarity” = *Perplexity)
This reminds me of IBM Watson back in the day
And us plebs working for the company are left to deal with the inferior tool.
Insert the "No, we have Copilot at home, children" meme here.
OpenAI's models are also available via Azure.
LOL. We're talking about the company that used to slap a nonsensical .NET suffix on everything.
It’s so strange that they keep renaming because Office (and office.com) is perfectly usable.
MSFT is the world’s best 2nd mover and can often make profits on ideas pioneered/launched by other companies before MSFT.
MSFT came late to the cloud party (2011? AWS launched in 2006 IIRC), still they became a big player there (~25% market share in 2025).
Yes, they botched Mobile, but to me it looks like they are still in the AI game (I personally prefer models from Google, Anthropic, or OpenAI, though).
It needs to be a lot clearer and more direct about the expectations of others.
"I'd like us to do X" is super passive, and a thousand miles from "You must do X"
1) "Iterating a bit": that email is barely 5 sentences; you could write it faster than coaxing it out of Copilot.
2) It is drivel. Could have stopped at "Consult with a licensed electrician".
3) Well...