We're clearly seeing what AI will eventually be able to do, just as many VOD, smartphone, and grocery-delivery companies of the '90s saw what the internet would enable. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for many use cases. Enough of it is available that things feel like they ought to work, but we aren't quite there yet. It's not useless; there's already a lot you can do with AI. But many use cases that seem obvious now, not only in retrospect, will only become possible once the technology matures.
What will it mean if the cutting-edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider's; you might as well be DigitalOcean.
I put them behind Meta on the evilness meter, but I think Google is less evil, which speaks volumes.
The only side of MS that I have any love for is Xbox, but even that is waning with all the studio acquisitions.
My ranking from most evil to least would be:
1. Google
2. Meta
3. Microsoft
4. Amazon
5. Apple
6. Netflix
Remote: Yes (or hybrid)
Willing to relocate: Tentatively
Technologies: Python, Flask, FastAPI, Java, Spring MVC, Spring Boot, Node.js, PostgreSQL, MySQL, Redis, Kafka, Kubernetes, Celery, AWS.
Resume: https://docs.google.com/document/d/12kjtGlJh3JpA8-HtXfBNT7CyZ9pclHfKYHja4eMr4-A/edit?usp=sharing
Email: alanjponte@gmail.com
LinkedIn: https://www.linkedin.com/in/alan-jason-ponte/
My name is Alan, and I have over a decade of experience building software professionally. Over the past three years, I've served as Tech Lead/Architect for a SaaS startup, where I've had the opportunity to lead the team's technical direction, engage with product design, and mentor junior engineers.
Before working as a Software Engineer, I spent a few years as a Computer Systems Engineer at a National Laboratory, where I had the opportunity to work on a variety of systems alongside scientists and engineers of various disciplines.
I'm looking to work on interesting products at any level (IC, Tech Lead, Fractional CTO, Technical Advisor). If you'd like to have a quick chat, feel free to reach out!
I feel like there are two challenges causing this. One is that it's difficult to get good data on how long the same person in the same context would have taken to do a task without AI versus with it. The other is that it's tempting to measure an AI with metrics like time until the PR was opened or merged. But the AI workflow fundamentally shifts engineering hours, so a greater percentage of time is spent on refactoring, testing, and resolving issues later in the process, including after the code was initially approved and merged. I can see how it's easy for a developer to report that AI completed a task quickly because the PR was opened quickly, discounting the amount of future work that the PR created.
Side note: I don't see a license anywhere, so technically it isn't open source.
1) Is the ultimate form of this technology ethically distinguishable from a slave?
2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?
Perhaps it depends on what software one is using.
For example, command-line search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com, and codeload.github.com, are not slow for me, certainly not any slower than GitLab.
I do not use a browser, nor do I use the git software.
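For reference, the tarball retrieval described above can be done with plain curl against codeload.github.com, which serves snapshot archives of any ref without needing git or a browser. A minimal sketch; the helper function name and the octocat/Hello-World repository are just illustrative examples, not anything from the comment:

```shell
# Build the codeload snapshot URL for a given owner, repo, and ref.
# codeload.github.com serves tar.gz (and zip) archives of any branch,
# tag, or commit, so no git client is required.
gh_tarball_url() {
  owner="$1"; repo="$2"; ref="$3"
  echo "https://codeload.github.com/${owner}/${repo}/tar.gz/${ref}"
}

# Example usage (hypothetical repo), piped straight to tar:
#   curl -L "$(gh_tarball_url octocat Hello-World master)" | tar xz
gh_tarball_url octocat Hello-World master
```

Swapping `tar.gz` for `zip` in the URL yields a zipball instead, and raw.githubusercontent.com covers single-file retrieval the same way.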