linschn · 10 years ago
I read the paper, and read up about the techniques used to do that (because the paper is very light on details). I came back completely underwhelmed.

This makes (clever) use of hundreds, if not thousands, of man-hours of painstakingly entered expert rules of the form IF <some input value is above or below some threshold> THEN <put some output value in the so-and-so range>.

The mathematical model of Fuzzy Trees is nice, but it is completely ad hoc to the specific modeling of the problem, and will fail to generalize to any other problem space.

This kind of technique has some nice properties (its "reasonings" are understandable and thus somewhat debuggable and somewhat provable; it smooths logic rules that would otherwise naively lead to non-smooth control; etc.), but despite the advances presented here, which seem to make the computation of the model tractable, I don't see how it could make the actual definition of the model anywhere near tractable.

Also, I dislike having to wade through multiple pages of advertising before I can find the (very light) scientific content.

-- Edit: I realize I am very negative here. I do not mean to disparage the work done by the authors. It's just that the way it is presented makes it sound far more impressive than it is. It's still interesting and innovative work.

JoeAltmaier · 10 years ago
Some rules can be derived by instrumenting humans as they perform the maneuvers and generalizing from their behavior. We used to instrument motorcycle riders at Harley-Davidson and create fuzzy-logic models of expert riders as they performed certain acts on a track (dodging a road hazard; emergency stop; hairpin turn). Our goal was also a fuzzy-logic driver model, which they used to help design new motorcycle suspensions/steering that would feel 'natural' to an expert rider, i.e. mesh well with the model they had for an expert.
nickbauman · 10 years ago
Wow, that's very innovative for HD. Why don't they put some of that innovation effort into their actual drivetrain? I loved my Buell's look and ride, but I mistrusted its horrid 50-year-old Baker transmission, which went on to fail completely at ~6,000 miles, forcing me to disassemble the entire engine and split the crankcase so I could repair it. After seeing its guts, I no longer wanted it. It ignores a half century of innovation in motorcycle design, producing a machine that I vote most likely to unexpectedly leave me on the side of the road.
linschn · 10 years ago
I asked myself the same question, and did read some papers, but could not find a recent comprehensive survey on automatic fuzzy rule generation (I admit I gave up after ~15 minutes).

What I found did not convince me that it would fare better than an off-the-shelf (somewhat) non-interpretable statistical supervised learning algorithm.

It can be a nice way of bootstrapping the rule-writing process, or of going the other way: discovering and analyzing new expert knowledge by looking at the rules.

But performance-wise, I would go the machine learning way anytime.

Also, Inverse Reinforcement Learning seems very promising: one infers the reward function by observing the expert acting.

jerf · 10 years ago
I'd like to double-click on that comment. :) If you're ever interested in writing more about that, I'd upvote it.
ep103 · 10 years ago
Wait, what is a fuzzy logic model? From the parent post, it seems to be like learning from data, but manually?
svalorzen · 10 years ago
I would assume that in a military setting (as in most bureaucratic/management settings) a solution like this has the immense advantage that one can precisely trace any error to a specific requirement/rule.

It is very hard to ask management to trust a system they know nothing about, where they have literally no control over the final behaviour, even if in the end it will perform better overall. In a rule-based system, by contrast, it is always possible to make adjustments and to pin mistakes very efficiently on specific causes.

I guess this is the main reason why "true" AI is currently used mostly in information fields, rather than in physical machines and engineering. No one would know how to deal with the outcome of a fuzzy learned algorithm making the wrong decision. This is also a reason why autonomous cars are very interesting to me, even though I bet they are still full of ad-hoc rules in order to have a layer of "manageability" over the overall system.

linschn · 10 years ago
I see how it can sound appealing to a bureaucrat, but as a programmer, debugging the concurrent evaluation of thousands of "natural" language IF...THEN... rules until I find the questionable one where a threshold was defined too low or too high sounds like a nightmare.
romaniv · 10 years ago
Comparing comments here to the comments on, say, first Alpha Go post, reveals the amazing amount of AI bias on this website.

When an expert system beats some human in a complex real-life problem the comments are about how it is narrow, boring, not sufficiently tested and ultimately doesn't matter.

When a neural network (with the help of MCTS and an entire data-center full of servers) beats some human in a board game, the comments here hype it through the roof and jump to conclusions about the coming dawn of AGI.

YeGoblynQueenne · 10 years ago
>> Comparing comments here to the comments on, say, first Alpha Go post, reveals the amazing amount of AI bias on this website.

I think it's because AI has only recently been in the news again, and it's been in the news thanks to machine learning and neural networks in particular (and more specifically, deep learning). The last time there was a big to-do about AI was in the '90s, and most people writing here were probably not old enough to figure out what the hell was up back then.

Er, I'm not dating myself here. I know about those things by happy accident (maybe a story for another time). Most people who graduated from CS in the last five or six years will not have heard anything about expert systems and GOFAI, except that they failed etc., if that.

Then along comes Google and promises it can make your phone talk to you. People are intrigued.

But of course, those who don't know their history are doomed to repeat it.

tomlu · 10 years ago
I'd guess there's a difference in kind that people get excited about.

The fighter jet AI technique is hard-coded to a very specific problem domain, and could only be reproduced in a different problem domain by doing it from scratch.

The technique used by AlphaGo is at least closer to the idea that we can eventually build generically trainable machines that can learn to do a variety of tasks, without having to code them from scratch every time.

quandrum · 10 years ago
For me personally, this is less exciting because AI jet pilots have a natural advantage.

Human pilots are limited in maneuvering by the amount of force their body can take. Computer pilots are limited by the amount of force the airframe can take. The latter allows much more aggressive and radical maneuvers.

That means, compared to playing Go, the robot pilot can be comparatively much worse than the human and still win decisively by taking advantage of high g maneuvers.

rbanffy · 10 years ago
I am not very comfortable with a machine that's very competent in killing fighter pilots. I am much less comfortable with such machines generalizing that competency to other, closer, problem spaces. Also, in cases where the use of deadly force happens without a human in the loop, being able to describe exactly which rules triggered and caused the death of a friendly pilot or that C-40 that happened to actually be a 737 full of passengers would be a requirement. "Because the plane got confused" is not very satisfactory.
gene-h · 10 years ago
Fighter jet AIs aren't that scary compared to the work being done on algorithms for teams of robot soldiers. The US's TARDEC and Australia's DSTO held a competition for this back in 2010 called the Multi Autonomous Ground-robotic International Challenge (MAGIC) [0]. In this competition, a team of aerial and ground robots had to perform a simulated combat mission to 'secure' a set of moving (other soldiers) and stationary targets (IEDs) in an approximation of an urban environment.

In simulation, the algorithms demonstrated for doing this do quite well, being able to complete the mission with a success rate of 97.5%, so long as one has 6 search robots and 3 gun robots.

This did not work so well in real life, partially because real robots are difficult to work with. It is still disturbing, though, because of the high success rate, not to mention the immediate applicability to robot SWAT teams. As a civilian, I'd be much more concerned with a SWAT AI than a fighter jet AI.

However, robot SWAT teams are still a ways off.

[0] http://singularityhub.com/2010/03/19/teams-of-military-robot...

[1] https://en.wikipedia.org/wiki/Multi_Autonomous_Ground-roboti...

[2] http://www.frc.ri.cmu.edu/~ssingh/Sanjiv_Singh/PUBS_CONF_fil...

RIMR · 10 years ago
How about a future where AI fighter pilots fight against other AI fighter pilots?

Maybe one day war will be less about killing people, and more of a battle between countries' best engineers.

Maybe I'm just optimistic, but I think robot wars would be a hell of a lot better than real wars.

drzaiusapelord · 10 years ago
>The mathematical model of Fuzzy Trees is nice, but this is completely ad-hoc to the specific modelization of the problem, and will fail to generalize to any other problem space.

Well, why should it? No one is inventing HAL-like AI anytime soon, or ever. If this system does a better job of killing the enemy than human pilots, then it's quite the breakthrough. Projecting air power is one of the ways countries keep aggressors away, and this would be quite an advantage for a variety of reasons. Not the least of which is that you can now design AI-driven fighters with zero design compromises made to keep human pilots alive.

I imagine fighter engagement consists of a fairly limited set of problems to solve. Think of this as just a souped up autopilot/autoland system, except with guns and missiles. We're not asking the AI to write the next Romeo and Juliet here.

>because the paper is very light on details

Defense contractors aren't known for sharing details. I imagine this is a competitive advantage and they want to keep their cards close to their chest. There may even be national security issues here.

linschn · 10 years ago
> Well, why should it?

Because what is trumpeted as a breakthrough may in fact be so narrow in scope that it may not even be usable in a flight-combat video game without a lot of work, let alone in any real-life environment.

It is very closely tied to the mathematical model of aerial combat that they devised and cannot easily be made to accommodate new insights or new challenges.

ris · 10 years ago
So, while it might be the fashionable thing to do some kind of (machine/deep/?) learning approach where you allow it to run millions of simulations and figure out things itself, I can understand why they didn't.

Learning approaches which depend on mass-simulation are great when your problem only ever exists in a "virtual" context, but what happens when you want to take your trained neural network out into the real world? Clearly it's going to have to adapt to the differences between the real world and the virtual world - but how would you do that? You can't run millions of dogfights in the real world to adjust its training.


argonaut · 10 years ago
This is called domain adaptation and transfer learning in the literature. There are ways to do that. It is an active area of research. Basically the idea is to run a few real-world dogfights (you could conceivably collect a few hundred) and use methods to adapt the simulation model to the new domain. Solutions involving unsupervised learning (e.g. no dogfights, just sensor data collected from fighters; you could collect thousands of hours this way) are also an active area of research.
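
As a toy illustration of that adaptation idea (the model and data below are invented, not from any real system): pretrain on plentiful simulated data, then run a few more gradient steps on the scarce real-world samples instead of retraining from scratch.

```python
# Plain SGD on squared error for a linear model y = w0 + w1*x.
def sgd(w, data, lr, epochs):
    for _ in range(epochs):
        for x, y in data:
            err = (w[0] + w[1] * x) - y
            w[0] -= lr * err
            w[1] -= lr * err * x
    return w

sim_data  = [(i / 25.0, 2.0 * (i / 25.0) + 1.0) for i in range(100)]  # simulator: y = 2x + 1
real_data = [(1.0, 3.5), (2.0, 5.5), (3.0, 7.5)]                      # real world: y = 2x + 1.5

w = sgd([0.0, 0.0], sim_data, lr=0.01, epochs=20)   # cheap, plentiful simulation
w = sgd(w, real_data, lr=0.05, epochs=50)           # adapt on a handful of real samples
print(w)   # slope stays near 2; intercept shifts toward the real-world 1.5
```

The same shape carries over to neural networks: keep the simulator-trained weights and fine-tune on the few real trajectories you can afford to collect.
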
linschn · 10 years ago
It is not about fashion, it is about not being ad-hoc.

For small-scale problems where most of the variables are well understood, this kind of approach works beautifully. Big problems are better tackled by a more generic approach (maybe with some ad-hoc adaptations, such as a mix of expert systems and statistical algorithms, feature engineering, etc.), because these approaches are more resilient to exposure to the real world, and the manpower invested in them is useful in more than one problem domain.

To address your last point, there is an extensive body of work on data-scarce environments. I've even seen a talk about applying reinforcement learning to endangered-species preservation, where you only get a single-digit number of interactions with the system!

Practicality · 10 years ago
The solution would be to make the simulator so good that there is no practical difference.
marcosdumay · 10 years ago
For a start, if it can be done by a fuzzy decision tree, it can be derived by a Bayesian network (that is basically a fuzzy decision tree that keeps extra data for learning) and made more versatile after that.

But the decision tree is much more tractable, so the longer the work is kept in this format, the more future-proof it is.

Practicality · 10 years ago
Sounds like how Deep Blue defeated Kasparov. The first time is always awkward, but now that we know that it can be done we can develop more generic algorithms. A Stockfish for air combat may be several years away but it's coming.
moheeb · 10 years ago
I'm not sure that an open source AI for air combat would get you very far...depending on the licensing terms.
TheArcane · 10 years ago
That's fuzzy logic for you. Probably why it mostly died around the turn of the century.

It's still used in a few systems as a complementary system involving PIC controls.

YeGoblynQueenne · 10 years ago
For those who read this piece of news and don't understand why there is no mention of machine learning, neural networks and deep learning: that's because the system described is a typical fuzzy-logic expert system, a mainstay of Good Old-Fashioned AI.

In short, it's a hand-crafted database of rules in a format similar to "IF Condition THEN Action" coupled to an inference procedure (or a few different ones).
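
As a sketch, such a rule base plus a naive forward-chaining inference procedure fits in a few lines of Python (the rule names and facts here are invented for illustration, not taken from the system in the article):

```python
# Rules are (condition, derived-fact) pairs over a working memory of facts.
# Forward chaining keeps firing rules until no new facts can be derived.

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for condition, new_fact in rules:
            if condition(facts) and new_fact not in facts:
                facts.add(new_fact)
                changed = True
    return facts

rules = [
    (lambda f: "bandit_locked" in f and "in_range" in f, "weapons_free"),
    (lambda f: "missile_warning" in f, "evade"),
    (lambda f: "evade" in f, "deploy_countermeasures"),
]

print(sorted(forward_chain({"missile_warning"}, rules)))
# → ['deploy_countermeasures', 'evade', 'missile_warning']
```

Note how the second rule's conclusion feeds the third rule's condition; that chaining is all the "inference procedure" amounts to in the simplest case.
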

That sort of thing is called an "expert system" because it's meant to encode the knowledge of experts. Some machine learning algorithms, particularly Decision Tree learners, were proposed as a way to automate this process of elicitation of expert knowledge and the construction of rules from it.

As to the "fuzzy logic" bit, that's a kind of logic where a fact is true or false by degrees. When a threshold is crossed, a fact becomes true (or false) or a rule "fires" and the system changes state, ish.
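
In code, "true by degrees" just means membership functions returning values in [0, 1] and rules firing with the strength of their weakest condition. A toy sketch (the thresholds and rules below are my own invention, not the paper's):

```python
# Membership functions map a crisp input to a degree of truth in [0, 1].
def close(distance_m):            # degree to which the bandit is "close"
    return max(0.0, min(1.0, (2000.0 - distance_m) / 1500.0))

def fast_closure(rate_ms):        # degree to which the closure rate is "fast"
    return max(0.0, min(1.0, rate_ms / 300.0))

def throttle(distance_m, rate_ms):
    # Rule 1: IF close AND fast_closure THEN throttle back (0.2)
    # Rule 2: IF NOT close THEN full throttle (1.0)
    r1 = min(close(distance_m), fast_closure(rate_ms))   # AND = min
    r2 = 1.0 - close(distance_m)                         # NOT = 1 - x
    total = r1 + r2
    if total == 0.0:
        return 0.6                # default when no rule fires at all
    # Defuzzify with a weighted average of the rules' outputs.
    return (r1 * 0.2 + r2 * 1.0) / total

print(throttle(3000, 100))   # far away → 1.0 (full throttle)
print(throttle(600, 250))    # close and closing fast → throttles back
```

The point is that nothing jumps discontinuously: as the inputs slide past the thresholds, the output blends smoothly between the rules' conclusions.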

It all may sound a bit hairy but it's actually a pretty natural way of constructing knowledge-based systems that must implement complex rules. In fact, any programmer who has ever had to code complex business logic into a program has created a de facto expert system, even if they didn't call it that.

For those with a bit of time on their hands, this is a nice intro:

http://www.inf.fu-berlin.de/lehre/SS09/KI/folien/merritt.pdf

saulrh · 10 years ago
Not actually hand-crafted. If you read the actual paper, they're training their fuzzy system with some kind of genetic algorithm. The theory behind it isn't in this paper, and it seems to be home-grown DIY-type stuff (pretty standard for military work like this), but it is still doing some optimization and learning. No idea whether it's just tuning weights or actually altering the tree itself, but I'd guess that they've basically reinvented decision trees.
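
For what it's worth, the "genetic" part can be as simple as an evolutionary loop that tunes a rule threshold against a fitness score; the one-dimensional fitness below is a made-up stand-in for a simulated-engagement score, not anything from the paper:

```python
# Toy genetic algorithm: evolve a "fire when within X meters" threshold.
import random

random.seed(0)

def fitness(threshold):
    # Invented objective: pretend the ideal firing threshold is 900 m.
    return -abs(threshold - 900.0)

def evolve(generations=50, pop_size=20, sigma=50.0):
    pop = [random.uniform(0, 3000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                            # truncation selection
        children = [p + random.gauss(0, sigma) for p in parents]  # Gaussian mutation
        pop = parents + children                                  # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
print(best)   # converges near 900
```

A real system would evolve many coupled thresholds (and possibly the tree structure itself), but the select-mutate-evaluate loop is the same.
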
YeGoblynQueenne · 10 years ago
>> If you read the actual paper

You're right that I didn't. Thanks for correcting me and apologies for the slight fudging. I think my description is still mostly accurate though.

>> I'd guess that they've basically reinvented decision trees.

That would make sense in the sense that it's kind of an obvious algorithm to re-invent if you're trying to learn propositional rules. I'm not so sure about the "evolutionary" part though.

YeGoblynQueenne · 10 years ago
Also, I should say: I'm really sorry this news had to be about an automated weapon. That sucks.
Retra · 10 years ago
It wouldn't be news if it weren't about a weapon. Or rather, it wouldn't have grabbed your attention.

Deleted Comment

Negative1 · 10 years ago
AI fighter pilots have been killing me in flight simulations for at least 30 years now using similar systems. From the paper, they basically built an expert system using something they call a Genetic Fuzzy Tree (GFT), which seems suspiciously like a behavior tree whose nodes are trained. They trained the GFT, then had it go up against itself, where the Red team was the 'enhanced' AI and Blue was supposed to be the human (this part was odd to me).

After they completed the training, they put it up against real veteran pilots, and the AI basically did a few things. It would take evasive maneuvers when fired upon and fire when in optimal range. That's pretty much it. And you know what? That's really all modern pilots need to do. It's amazing what they did with Top Gun, making this stuff not look boring. At the end of the day, it's just waiting for some computer to tell you that you have target lock and pressing a button. If attacked, take evasive maneuvers and pray. Takeoff and landing on a carrier is the scariest part.

I'm quite curious how this system would perform in WWII era dogfights where you had to worry about the stress on your plane, had to deal with engines that failed and stalled all the time and maneuvers that were much slower and closer to the enemy (plus no missiles).

Even so, I enjoyed reading the paper (not the article) so would recommend it if you're into Game AI at all.

stcredzero · 10 years ago
> It's amazing what they did with Top Gun, making this stuff not look boring.

When airplanes get within gun range ("knife-fighting range") things are very interesting. Beyond visual range is just weapons management, but close-in it's energy management to optimize maneuvering to get to a killing position.

Also, it's much more interesting when you have something at stake besides losing a video game and having to restart.

> I'm quite curious how this system would perform in WWII era dogfights where you had to worry about the stress on your plane, had to deal with engines that failed and stalled all the time and maneuvers that were much slower and closer to the enemy (plus no missiles).

By the end of WWII, properly maintained engines flown within parameters were pretty reliable. (So no pulling negative g's in a plane with a carburetor.) The concerns of fighter pilots at very close range or trying to evade when targeted by missiles can still be similar in certain regards to pilots trying to stay alive in WWII.

AI's would probably be very good at energy management and taking shots of opportunity.

empath75 · 10 years ago
Seems to me this gets a lot more interesting when they start building fighter jets without the assumption that a pilot will be in the plane at all. You can build much smaller, lighter planes without the need for life support systems or worrying about g-forces that will kill a human pilot.

I'll grant you that doing this in the US is going to be problematic because of ethical concerns, but there is definitely going to be some country that does it, and as soon as they do, they'll instantly gain air supremacy.

niftich · 10 years ago
Aren't those called missiles or UAVs?

As in, we should consider why we have planes in the first place: it's to deliver some payload (bombs, missiles, or in the olden days, cameras for photography) to a specific place where you make use of that payload and then go home. Once you remove the human, you're not left with too many uses that can't be solved with existing technology.

swsh · 10 years ago
The HiMAT basically demonstrated this. Unfortunately there doesn't seem to be much information online about the program; however, some books I've read note that the program was a success.

http://www.boeing.com/history/products/himat-research-vehicl...

Rudimentary fuzzy logic in this kind of platform should be able to defeat a human pilot regardless of the pilot's cunning/unpredictability/other human aspects that movies tout as superior human features. But practical requirements such as range, payload, loiter time, and pork-barrelling may mean that the platform is otherwise compromised.

Additionally in an era where things such as airborne lasers are becoming a reality, the whole meta may completely change.

alkonaut · 10 years ago
The life-support weight and g-force worrying can be dropped, but I think parameters like wing-loading, range, load capacity, the radar and its power generation will set the size of a fighter plane to be pretty much that of a fighter, even if you drop the pilot.
rwallace · 10 years ago
You're saying modern air to air combat is relatively simple and doesn't require that much pilot skill? If that's the case, why do all the accounts of the 1991 and 2003 Gulf Wars claim pilot skill was perhaps the single largest advantage the Americans and British had over the Iraqis? (Not a rhetorical question; I don't know what the specifics of the task consist of, and I'm curious about the answer.)
neurotech1 · 10 years ago
In Gulf War 91, the Iraqi pilots were both skilled and combat experienced from the Iran-Iraq war. Although rigorously trained and highly skilled, very few USAF pilots flying operationally had actually previously experienced air-to-air combat.

Apart from technically superior radar and weapons systems, the USAF Weapons School[0] teaches USAF fighter pilots how to engage within the envelope of those weapons.

In the early part of the Vietnam war, the air-to-air missiles kept missing the target due to poor pilot training. The result of the Ault report[1] and subsequent TOPGUN[2] program worked to remedy the shortcoming in the training. The USAF FWS was created after seeing the results of the Navy TOPGUN training.

[0] https://en.wikipedia.org/wiki/USAF_Weapons_School

[1] https://en.wikipedia.org/wiki/Ault_Report

[2] https://en.wikipedia.org/wiki/United_States_Navy_Strike_Figh...

crazypyro · 10 years ago
Not saying this has any relation to this specific case, but both of those governments have made up similar lies about pilot ability in the past (the British in WWII hiding radar behind "pilots who eat carrots to improve eyesight" is the classic example) to conceal technological advancements.
vlehto · 10 years ago
I'm not a Gulf War specialist, but most recent analyses predict pretty consistently that if you pit decent-to-great pilots in an F-35 vs. an F-35 with similar missiles, you get a mutual kill every time. The same goes for any plane that has a BVR radar and missiles plus off-boresight close-range missiles. Most 4th-gen planes have these capabilities.

I didn't agree with this at first, but they claim that modern missiles can differentiate between the heat signature of the plane's body and the engine nozzle. Modern missiles are also capable of 50G turns. That G-loading says nothing about turning circle, but it does say surprisingly much about how quickly you can deviate from a straight line. The fighter plane loses every time, from every angle, at every speed.

The only problem with air-to-air missile dominance is that missiles burn through all of their fuel relatively quickly. Once that happens, maneuverability drops really fast. So the "no escape zone" is crucial, and it varies from missile to missile.

If you want to stay alive, you need to out-range or surprise the enemy. Either stealth, superior radar or longer range missiles.

It looks like fighters have become very expensive and very mobile SAM sites. It beats me why countries without aircraft carriers would ever pick the F-35. For the same price you can get ~50 trucks with Pirate IRST and METEOR missiles: more area covered at any given time, while survivability goes up like hell I don't know what.

VLM · 10 years ago
"That's really all modern pilots need to do."

Recorded conversations between ground support pilots and forward air controllers would disagree. Lots of very fast paced observation, pattern matching, orientation and tough judgment calls.

If we ever fight a competent air adversary I would imagine the AWACS to pilot conversations would be fascinating.

Learning to fly a plane is like learning to throw a baseball: it takes hours at most. Of course, learning to beat a pro player at their entire game, not just one activity, so as to take their job, is a little harder. An interesting observation of human judgment, regarding the ratio of people who think they can go pro vs. the people who have the skills to go pro, is not terribly inspiring WRT AI pilots: there'll be lots of coders with bravado and not much action. And learning how to lead a team to a World Series win isn't even definable at this time. But yeah, tossing that ball over there when I say to do it is a solved problem. Likewise, successfully accomplishing a combat mission is a lot more complicated than "this is how to keep the wings level, and that button makes things go boom." A really smart autopilot is going to help, yet it isn't the only thing necessary.

Note that it's possible to send a man to do a cruise missile's job, or even a plain old missile's job. That doesn't imply a cruise missile can do everything a man can do; it just means the man was mismanaged and his abilities not taken full advantage of.

vbo · 10 years ago
If we assume the wars of the future to be fought by AI-driven warmachines, can we abstract the matter further and have virtual wars? Our AI versus your AI fighting on computational resources provided by, erm, Switzerland. Nobody gets hurt and no money is spent building and destroying warplanes. Everybody wins. And have a prize pot, so actual invasion of territory is not necessary. Bulletproof solution, may I say. What do you mean it won't work?
sevenless · 10 years ago
But nobody has any skin in the game that way.

The Star Trek version was to have citizens of both sides executed to match simulated casualty numbers... https://en.wikipedia.org/wiki/A_Taste_of_Armageddon

vbo · 10 years ago
To take matters even further, and also to somewhat address skin in the game, how about we do away with AI virtual warfare, since it too implies taxpayer money being used for eventually futile endeavours, and simply organise war-chess games between the leaders of the countries? President Trump, your move.
seren · 10 years ago
This is part of the plot of Iain M. Banks' 'Surface Detail' (2010).

Two parties agree to wage a war in virtual worlds to decide whether virtual hells should be allowed or banned. Unsurprisingly, the losing party tries to move the war into the real world to reverse the losing trend.

XorNot · 10 years ago
Well it was a little more complicated than that. And also that whole book was amazing.
adrianN · 10 years ago
The so-called "cyber war" will be a very important component of any future war between nation states. Taking over the enemy's SCADA systems, power grid, net infrastructure, etc. can do massive economic damage with few casualties.

I don't think we'll ever see a civilized form of war where no real harm is done, because abiding by the rules of such a war is not a game-theoretic equilibrium. If a party builds a real army in addition to the simulated army and uses it, it will win the war.

sevenless · 10 years ago
We have a Geneva Convention, and international norms against nuclear weapons, which are also pretty good deterrents themselves (we hope).

Just like nuclear powers sometimes wage limited conventional wars, it's possible to imagine a set of international laws and norms where disputes would be resolved by virtual conflicts without escalation to armed force.

Advances in robotic war machines could make them so fearsome they deterred real-world armed conflict in favor of virtual conflict. (Of course, that's what they said about the machine gun before WW1)

duncan_bayne · 10 years ago
What makes you think that disabling infrastructural targets will result in fewer casualties?

Power for heating, cooling and hospitals. Transit systems for food distribution. Computers for synchronising all of the above.

How many casualties do you think would result from shutting down infrastructure in NYC in mid winter?

Even evacuation causes casualties. I was reading an assessment of the Fukushima evacuation that suggested fewer casualties would have resulted from just staying there.

jdmichal · 10 years ago
The commitment and potential loss of resources while waging war is actually a valuable input to the system. If all war was virtual, then why not continuously wage war with everyone weaker than yourself all the time? Why would the strongest not virtually-subjugate every other nation? In the real world, resources aren't unlimited and losses are cumulative. So consideration must be made as to when it's worth committing to a war.
rm_-rf_slash · 10 years ago
Nobody that loses a virtual war is going to give up and let the other side take what they want without a very violent fight.
tomjen3 · 10 years ago
Assuming the results are pretty accurate about the outcome, why not? You can either surrender now and not have any civilians killed, or suffer the following casualties and still lose.

Heck it might even prevent war if the simulation says that both sides will suffer too high casualties to make it worth it.

teraformer · 10 years ago

For this to work, every aspect of the simulation would have to match up with real-world circumstances.

...which would mean no secrets.

...or weapons that, when used, obviate many degrees of secrecy, like nuclear weapons.

Deleted Comment

JoeAltmaier · 10 years ago
But it worked on Star Trek!

Dead Comment

Deleted Comment

saiya-jin · 10 years ago
I think you're missing a big reason why wars are waged: the massive cash flows stemming from delivering actual destruction and then rebuilding.

War is a dirty business, but business it is, and what a juicy one. Simpler people hate banks/bankers, yet I haven't heard about any protesters occupying Lockheed, BAE, or similar folks and wishing them jail. The last thing these powerful corporations want is to change a game that works so well for them now.

sevenless · 10 years ago
This is a conspiracy theory: while weapons manufacturers profit from war, they would profit any way governments bought, maintained, or upgraded weapons. There is also no evidence that BAE or GE go out of their way to cause wars. Indeed, they would rather sell weapons to all sides. Arms races are certain profit; wars bring uncertainty and cut off other massive cash flows (like the Iran oil embargo).
JoeAltmaier · 10 years ago
Curiously, that happens already. Simulations are run on battles before they are fought until a winning tactic is found.
jacquesm · 10 years ago
Now if only we could get the losing party to accept the outcome of their simulations too and to capitulate.

In practice, the losing party as often as not will try to inflict as much damage on the victor as they can, and more often than not what starts as a battle ends up being a long term occupation and that's when all simulations seem to break down.

Battles are 'easy', long term planning is not.

Domenic_S · 10 years ago
Original Star Trek did it!

In this episode, the crew of the USS Enterprise visits a planet whose people fight a computer-simulated war against a neighboring planet. Although the war is fought via computer simulation, the citizens of each planet have to submit to real executions inside "disintegration booths" to meet the casualty counts of the simulated attacks. The crew of the Enterprise is caught in the middle and are told to submit themselves voluntarily for execution after being "killed" in an "enemy attack".

https://en.wikipedia.org/wiki/A_Taste_of_Armageddon

coldcode · 10 years ago
Which also contains one of the best Spock lines ever: "Sir, there is a multi-legged creature crawling on your shoulder" - neck pinch, boom.
prodmerc · 10 years ago
"We lost"

"Well, they're not taking [whatever the goal was] without a real fight!"

It will then be AI/humans vs. AI/humans, because one side will have to bear (real) heavy losses...

pessimizer · 10 years ago
Wars of the future will be fought by people strapping bombs to themselves, obtaining the most destructive guns they can find and all the ammo that they can carry, and attacking civilian areas. 100 million dollar AI killing machines are mainly ways to funnel state money to the connected.
marcosdumay · 10 years ago
That's not much different from how things have worked since the beginning of the nuclear era. Both sides compare their strength, and the weaker side often capitulates to avoid a real war. (It was less common in the past, but it always happened to some extent.)

Of course, it does not always work.

btbuildem · 10 years ago
You might find this an enjoyable read: https://en.wikipedia.org/wiki/Peace_on_Earth_(novel)
spitfire · 10 years ago
mdpopescu · 10 years ago
Or you could have the warring nations fight in an Eve Online Alliance Tournament :)
Aardwolf · 10 years ago
They only did one simulation? It seems strange to report on the details of a single simulation when more would make sense.

Why not do hundreds of simulations, with different numbers of attacking and defending jets? Sounds like fun; it can't be hard to find pilots who want to do this simulation, it's merely hundreds of hours of gameplay :).

Or was it that they did hundreds, but this is the only one where the AI won, and it had 4 planes while the humans had only 2?

hackuser · 10 years ago
The Pentagon is betting on human-AI teaming, called 'Centaurs'. The foundational story is this:

Back in the late 1990s, Deep Blue beat the best human chess player, a demonstration of the power of AI.

Around ten years later, a tournament of individual grandmasters and individual AIs was won by ... some amateur chess players teamed with AIs.

AIs aren't good at dealing with novel situations, humans are; they complement each other (and I'll add: unlike most other endeavors, in war the environment (the enemy) is desperately striving to confuse you and do the unexpected. Your self-parking car would have more trouble if someone was trying everything they could think of to stop it, as if their survival was at stake). Also, we strongly prefer humans make life-and-death decisions; hopefully that turns out to be realistic.

prodmerc · 10 years ago
Huh, couple that with an aircraft not bound by human limits (no life support, much faster maneuvering with no loss in decision making) and it should be awesome. And terrifying.
noir_lord · 10 years ago
There is a terrible sci-fi film called Stealth that explores some of that.
saiya-jin · 10 years ago
you mean drone?
vlehto · 10 years ago
Or missile. There is terrific range advantage if you only fly half of the round trip.
tdy721 · 10 years ago
Was this Raspberry Pi powered? This story makes that claim: http://www.newsweek.com/artificial-intelligence-raspberry-pi...

If that is true, it puts this achievement in a totally different class.

Johnny_Brahms · 10 years ago
Well, why not? The computers they have in those extreme situations are not the newest Intel Xeons. Battle-tested and reliable computers are years behind their more modern desktop counterparts.