Anyone remember "The Grid"? I wrote a post about it a few years ago.[0] Essentially it was a scam where "AI will replace web developers!" and raised a few million in VC. Be very careful of buzz words and companies claiming to replace engineers with AI.
Anyone actually used their builder product? I signed up but apparently I have to 'invite' 5 or more 'friends' to get early access. Feels like vaporware.
Hey man, I am the CEO at Engineer.ai - it's not vapourware, but it's also not the holy grail. Marketing put up a gate because we want to make sure we deliver a strong experience - Dropbox also had a gate, so I think we took a leaf out of their book. As for customers, we have built a number of projects (186 so far through the platform), including the BBC Click audience app and the in-game auction SDK for the SF Giants (a startup was our customer); other clients include Virgin, Future Group and a bunch of other SMEs, or the SME inside big companies. You can email me at s at engineer dot ai and I'll make sure to get it activated so you can see. Here is also a video demo without any marketing blah blah on it - https://youtu.be/dCk66hrlLmM
I used it (a year later than I was expecting) and it was so bad it was worthless. I came in with low expectations and it didn't even meet them. It was slow and unresponsive, and it cranked out sites 10x worse than what you could do in Squarespace or Wix if you invested 20 minutes or bought a template.
> Engineer.ai’s “Builder” product breaks projects into small ‘building blocks’ of re-usable features that are customized by human engineers all over the world, making the process cheaper than the average process
There doesn't appear to be AI involved. A very good business model, but no AI.
What I expected the founder to say was "we've proven people want our product, now we can scale it even further by building the AI tool we always wanted to build," but I don't see that.
This is the part that got me: "... everyone can build an idea without learning to code"
I went to my first tech conference when I was 13. One of the hot items was a tool that made programmers unnecessary. It was targeted at cheapskate businesspeople. Decades later the company is long dead. But the suckers are still out there. They think that coding is the hard part, in the same way that they think the hard part about building a house is nailing things together. But in both cases, the part you're really paying for is expertise: good development firms and good homebuilders know how to turn hazy human desires into very specific implementations, while also shaping those desires to be reasonable and achievable.
Personally, I don't believe anyone will be able to achieve full code synthesis for a very long time. I'm a firm believer that we'll always need engineers to write the code, product managers to help refine customer requirements, and so on. Having been a developer on the front lines for almost a decade, I've always believed that good software development is more akin to art than to an exact science ("good code is like poetry"). That being said, I do think a lot of the stuff we end up doing as part of the SDLC is incredibly repetitive from product to product: boilerplate code, setting up basic architecture, etc.
Those are the areas that we're trying to automate as much as possible so that humans can focus only on the custom bits.
- Disclaimer, I'm a VP E at Engineer.ai
Ideas are a dime a dozen. The hard part is always asking the right questions and collecting the right data to filter out bad ideas and refine good ones into practical products.
Here is where we use AI/Expert Systems/Heuristics (please note I am not the AI expert, but I'm trying to be as transparent as possible):
- Pricing is a Supervised Learning model (a toy sketch follows this list).
- Custom Features are a Convolutional Neural Net + NLP.
- Resource Allocation (we tap into the capacity of other dev shops) is an OR/ML combination.
- Sequencing of what to do is an ML/SL problem.
- Complexity is a Clustering problem.
- Grading Devs is a Static Code Analysis (industry standard) + NLP problem.
- Quality Early Warning is Supervised Learning + Heuristics (we identify potential problems early based on a developer + feature-set history analysis).
- Templates are updated based on features added by subsequent customers.
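To make the first item above concrete, here is a minimal sketch of pricing as supervised learning. The feature names, training data and model choice are my own illustration, not Engineer.ai's actual system:

    # Hypothetical pricing model: the features and data are invented;
    # only the technique (supervised regression over historical project
    # data) comes from the comment above.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Each row: [feature_count, custom_feature_count, platforms, complexity]
    X_train = np.array([
        [10, 2, 1, 0.3],
        [25, 8, 2, 0.7],
        [40, 15, 3, 0.9],
        [12, 1, 1, 0.2],
    ])
    y_train = np.array([8000, 35000, 90000, 6500])  # past project prices (USD)

    model = GradientBoostingRegressor(n_estimators=100)
    model.fit(X_train, y_train)

    # Quote a new project: 18 features, 5 custom, 2 platforms, mid complexity.
    print(model.predict([[18, 5, 2, 0.5]]))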
---> Building Blocks
- Features are one or many building blocks
- They communicate through an ESB that allows a smarter way of messaging between individual areas.
- The ESB allows us to "plug n play" -> today it still needs human stitching, but that's a scale problem we are looking to fix (see the sketch below).
We are at step 5 of 12 on the way to the final vision - and the above are at varying stages of deployment (some early, some more established).
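A minimal sketch of the "building blocks over a bus" pattern described above. This is a generic publish/subscribe illustration; the topics and blocks are invented, and Engineer.ai's actual ESB is surely more involved:

    # Toy service bus: "building blocks" publish and subscribe by topic,
    # so they can be recombined without knowing about each other.
    from collections import defaultdict

    class Bus:
        def __init__(self):
            self.handlers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.handlers[topic].append(handler)

        def publish(self, topic, payload):
            for handler in self.handlers[topic]:
                handler(payload)

    bus = Bus()

    # Two reusable blocks, stitched together only via bus topics.
    def auth_block(payload):
        print("auth: session for", payload["user"])
        bus.publish("session.created", payload)

    def analytics_block(payload):
        print("analytics: logged", payload["user"])

    bus.subscribe("user.login", auth_block)
    bus.subscribe("session.created", analytics_block)
    bus.publish("user.login", {"user": "alice"})  # auth fires, then analytics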
> "They charge you for every line of code, we bill you for what’s unique."*
Read literally, this means that if one client hires Engineer.ai to build something, the next client can get the same product for free. Good luck with that, Engineer.ai.
* https://www.engineer.ai/how-it-works
To be clear, the next client doesn't actually get it for free.
The way it works is that we use pre-fabricated components in conjunction with an ESB in order to create what is essentially a base for developers to then fill in with customization and business logic.
Think of it as a set of well-structured libraries that can be automatically stitched together (with dependency management and merge conflict handling).
We don't reuse a customer's business logic - that's what makes their app unique. So for every customer we build for, there will always be that human element of customization.
- Disclaimer, I'm a VP E at Engineer.ai
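A minimal sketch of the split being described: a prefabricated block provides the generic flow, and the human-written business logic plugs into it. The class and hook names here are invented for illustration:

    # Hypothetical "prefab base + custom business logic" pattern.
    class CheckoutBlock:
        """Reusable building block: a generic cart/checkout flow."""
        def __init__(self, price_hook):
            # price_hook is the per-customer logic a human engineer writes.
            self.price_hook = price_hook

        def total(self, items):
            subtotal = sum(i["price"] * i["qty"] for i in items)
            return self.price_hook(subtotal)

    # Customer A's unique rule: 10% off orders over $100.
    checkout = CheckoutBlock(lambda s: s * 0.9 if s > 100 else s)
    print(checkout.total([{"price": 60, "qty": 2}]))  # 108.0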
Thanks for weighing in, Rohan. I see what you're trying to do here: you are building an app/website builder, and you're leveraging some of the bespoke work you've done in the past to make work for future clients less expensive.
That's what most agencies do (even IBM, CGI, etc) but you're doing it through an online interface. If I'm not mistaken, you are leveraging economies of scale to make your offerings cheaper than those of smaller agencies, effectively trying to squeeze them out.
My comment was specific to the marketing. You make it seem like I can get a copy of someone else's app if I just want the exact same thing that someone else got, for next to nothing.
I would be astonished if the next client gets it for free. The next client might get it for less than it cost the first client, but Engineer.ai has to pay back those investors.
It was meant as a joke. I'm just pointing out how silly this all sounds. E.ai looks like just another agency with really fancy marketing and an inflated valuation.
They can do whatever they feel like doing.
They are just an outsource provider, plain and simple.
How is this AI? It looks like they're just building interfaces and then outsourcing implementations of them. I'm not saying it's not a smart strategy, but what makes it AI?
Small correction - we are not an outsourced provider. About 40-60% of the building process is machine-operated; the rest is human, and those humans come from our network of devshops. But we don't outsource it to them - we pick the individual engineers we want to work on the problem on the basis of our rating system. This screenshot (https://snag.gy/XgJfny.jpg) shows what our Capacity Partners see.
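For illustration, a toy version of picking engineers by rating. The weights and fields below are invented; per the comment above, the real ratings combine static code analysis and history, not these made-up numbers:

    # Hypothetical engineer-selection sketch: rank the pool by a
    # weighted score and pick the best match for the required skill.
    engineers = [
        {"name": "dev_a", "code_quality": 0.91, "on_time": 0.80, "skill": "ios"},
        {"name": "dev_b", "code_quality": 0.78, "on_time": 0.95, "skill": "ios"},
        {"name": "dev_c", "code_quality": 0.88, "on_time": 0.90, "skill": "backend"},
    ]

    def score(e):
        return 0.6 * e["code_quality"] + 0.4 * e["on_time"]

    ios_pool = [e for e in engineers if e["skill"] == "ios"]
    print(max(ios_pool, key=score)["name"])  # dev_a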
Replace "AI" with "software" (which for all intents and purposes it is).
> “Software is the centre of every business today and the market has been waiting for a solution that eliminates technical barriers to build software so that everyone can engage in the new economy,” said Manu Gupta, Partner at Lakestar. “By creating a software powered assembly line combined with the best global human talent, Engineer.ai’s Builder bridges the gap between an idea and a software product to enable it.”
"The most valuable businesses of the coming decades will be built by entrepreneurs who seek to empower people rather than try to make them obsolete."
"When a cheap laptop beats the smartest mathematicians at some tasks, but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other – they’re categorically different."
"Palantir takes a hybrid approach: the computer would flag the most suspicious transactions on a well designed user interface, and human operators would make the final judgement as to their legitimacy."
This quote, and the kind of thinking that engenders it (and is engendered by it), bother me.
Yeah, a cheap laptop can do arithmetic faster than 'the smartest mathematicians'. And the smartest mathematicians can do arithmetic faster than many children. Does that make mathematicians and children 'categorically different things' as well? In some trivial sense, sure, but it doesn't preclude any kind of connection between the two, or require some deep new ontological commitments to model.
I'm all for the current practical approach of using 'AI technology' as a human supplement. But I'd rather not frame it as (what I perceive to be) some kind of mystic, dualistic argument. At least not until we know more about both.
I also don't really think Peter Thiel is worthy of being the keystone of any kind of argumentum ab auctoritate in this particular field.
You're absolutely correct: we don't use blockchain tech anywhere in our stack yet!
It's something we're currently exploring as a way to augment our business and deal with complex problems like identity management, royalty payouts, and escrowed payments. Additionally, today we can only work with developers in devshops; in the future we want to expand into the freelancer market. Dealing with freelancers at scale is a complicated proposition - it's been super hit and miss to find a well-structured workflow to manage them. We're hoping we can solve that problem with a mix of automation and blockchain.
- Disclaimer, I am the aforementioned VP Blockchain at Engineer.ai
I'm now seriously considering adding that Prometheus photo to our press kit ;)
Jokes aside, my colleague @sachmans has posted a comment above that should hopefully answer your question on how we use AI. As for how we define AI: as an engineer myself, I'm a little annoyed by how it's become an umbrella term for everything from basic statistical models to ANNs. Unfortunately that is the reality of it - and so we've consciously decided to use it as an umbrella term for the various applications of ML/NLP/NNs that we use internally.
- Disclaimer, I'm a VP E at Engineer.ai
Wouldn't this technically be like the 4th AI bubble? I remember reading that these bubbles have been happening since the '60s.
edit: Okay, I'm slightly wrong. I looked it up out of curiosity. This would qualify as the 3rd major AI bubble, but there have been a few minor ones. The busts are also known as "AI winters": https://en.wikipedia.org/wiki/AI_winter
"AI winter" refers to the collapse in funding for AI projects rather than a collapse of companies.
Quite hilarious, yet quite frustrating for us, given how we market ourselves, to have to keep explaining that all the nonsense going on out there is not really AI!
Why would you write code using AI? If the AI is able to figure out the requirements, why not just let it perform the task directly, simply speaking?
I always have a hard time seeing the benefit of code-generation tools in general. If you can generate the code for a piece of functionality, you may as well abstract away primitives for it and make it a one-liner operation in the code you're writing. If that's not possible, it's probably a shortcoming of the language, framework or whatever system you're using (which is admittedly sometimes the case in the real world, because things evolve slowly).
Code generation can be useful if it produces provably correct code. Any abstraction you write yourself you still have to test carefully. For example, it's a lot easier and safer to use something that generates state machines from descriptions than to write your own state-machine interpreter.
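To make the state-machine point concrete, a minimal sketch of driving behaviour from a declarative description instead of hand-rolled control flow. The turnstile example is generic, not from either comment:

    # A state machine as data: the transitions are a table, the
    # "interpreter" is a few lines, and the table itself can be
    # generated or validated mechanically.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    def run(state, events, table=TRANSITIONS):
        for event in events:
            if (state, event) not in table:
                raise ValueError(f"no transition for {event!r} in {state!r}")
            state = table[(state, event)]
        return state

    print(run("locked", ["coin", "push"]))  # locked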
Sounds like just marketing bull. It sounds like they have libraries and they are using them to spin up apps faster. That might have value if the libraries are good but that's clearly not AI.
What if the Mechanical Turks are like the Monte Carlo simulations used for training the AlphaGo networks?
What if you stuffed the Qt docs into a DL model, using the Qt source code as training data? Could the network produce usable source code based on the docs?
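Purely speculative, but the question above amounts to building (doc, code) training pairs for a sequence model. A sketch of that framing, with hypothetical paths and a naive file-stem pairing; nothing here is a working Qt-docs-to-code system:

    # Hypothetical data prep: pair each doc snippet with its source file
    # to form supervised (input, target) examples for a seq2seq model.
    from pathlib import Path

    def build_pairs(docs_dir, src_dir):
        pairs = []
        for doc in Path(docs_dir).glob("*.txt"):
            src = Path(src_dir) / (doc.stem + ".cpp")
            if src.exists():  # naive pairing by matching file stems
                pairs.append((doc.read_text(), src.read_text()))
        return pairs

    # pairs = build_pairs("qt_docs/", "qt_src/")
    # Whether a model trained on such pairs emits *usable* code is
    # exactly the open question in the comment above.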