I am a little scared of the distinction we are starting to make between "computers" and "developers' computers".
On most computers nowadays you cannot code (tablets and smartphones); are computers doomed to become an expensive tool for a few "nerds"? What will be the impact on computer literacy?
I think the rise of P2P, file sharing, and the openness of the Internet in the last decade significantly narrowed the developer-user gap; and it has been growing again since then, motivated by corporations' desire to maintain control over their users.
> motivated by corporations' desire to maintain control over their users.
I think that's only one factor, and not a majority one.
Most users don't want to have to deal with "how it works". They want a simple, easy to use tool that works reliably... And they want to call someone to "fix it" when it "breaks". That's how it works with plumbing, cars, landline phones, stereo components, televisions, and all the electronics they've ever used.
The exceptions are computers and some smartphones, which can present cryptic error messages, have weird things in their settings, and generally make a "dumb user" feel out of their element. Think about the confusion users feel when confronted with a funny noise in their car. "I'm not a mechanic, what does that noise mean?" is no different from "I'm not a computer person, what does that error mean?" What's more, the meaning of the question is not "what, mechanically/electrically, is at fault?" It is "how much time/money will it cost to get it fixed?"
It's not just a small preference, either - the height of luxury is "push button" services that "just work". Go to a high end hotel, and your room phone has just one button. Top end consumer products of all sorts strive to be an easy-to-use "appliance". A dumbed down user interface without developer tools is user preference, it's status, it's customer comfort and pride, all tied into one.
IMO the most impressive thing about OSX is how well it supports both audiences: it feels like a push-button, high luxury, comfortable, easy device to my mother. But under the hood there are great logs and a solid BSD-based operating system model. It comes prepackaged with a lot of developer tools, hidden in a place where I would look right away, but my mother would never notice.
Sure, some companies use software to limit and control their customers (cough cough Sony), usually with sharp legal/lobbyist teeth to enforce that control. But 99% of companies out there just want to make their users feel comfortable, high status, and competent to use their device.
While I agree with RMS that this split is inevitable, I don't believe it's about control. It's about two distinct market segments: auto enthusiasts who want control over the torque settings in their high end car, and people who just want a car that fucking works. Chefs who want sector-by-sector control over their oven's heating profile, and people who just want to be able to cook a fucking roast without burning it.
I would put Instagram or SnapChat firmly into the production column. While many do not like the "output" of this production, that does not change the fact.
And a lot of music creation apps exist for tablets/phones.
You can create content on tablets, and some of it is excellent content.
Development isn't done on tablets because the input devices we have to make code are limited to a keyboard, and most people think text files are code, rather than a serialisation/deserialisation format for an AST.
You could easily build an AST with gestures and speech rather than tapping buttons, and I think in 10-20 years time that's how we'll make software.
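Python's own `ast` module makes the "text is just one serialisation of a tree" point concrete — a small sketch:

```python
import ast

# The source text is just one serialisation of the tree; the tree is
# what tools actually operate on.
source = "total = price * quantity + tax"
tree = ast.parse(source)

# Collect every variable name by walking the tree, never touching the
# program as a string of characters.
names = sorted({node.id for node in ast.walk(tree) if isinstance(node, ast.Name)})
print(names)  # ['price', 'quantity', 'tax', 'total']
```

A gesture- or speech-driven editor would build and edit `tree` directly, and only serialise it back to text (if at all) at the very end.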
Have you met an average user? The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.
Hell, even the lack of window management in iOS/Android is making the UX much easier to understand for the majority of users I know. My granddad, who was an excellent mechanical engineer, has been using computers for the last 20 years, and he still struggles with the click/double-click distinction.
> and he still struggles with click/double-click distinction.
Have you tried teaching him that? I highly doubt an old person, especially one with an engineering background, will have trouble understanding the distinction if someone bothers explaining it to them.
Or in general - it's surprising how much non-tech people can understand about technology if someone bothers to sit down with them and explain the concepts to them. Usually the reason they don't learn this stuff themselves is the typical human impulse of "if I haven't figured it out in 3 seconds flat, it's too difficult and I won't understand it".
> The mandatory updates, lack of permissions and sandboxing are only a good thing for a user with typical computer literacy level.
Only if you want to keep them illiterate, which companies are more than happy to do since it means they can be more easily persuaded and dependent consumers.
What do you mean? I pick up my Android phone and I've got an app that gives me a python shell, "terminal ide" which includes tons of cli developer tools like a C compiler and various editors, a full debian install I use for more secure SSH (using real openssh) and development and even some operations on various servers. There are full out Java IDEs on Android that you can install even.
So those are just a few of the ways you can code on Android.
If desktops become more expensive, it'll just mean people are more motivated to make tools like this. Android phones and tablets are basically treated as cheap commodities and there's an extremely competitive market for them, if anything, the entry price has gone down.
Now, admittedly I'm not sure how this situation is on iOS, but maybe someone could link similar tools on there?
I always find that comments like this are doom and gloom and never celebratory that we might reach a point where computers are finally stable and secure enough to be treated like appliances. The first automobiles required dozens of steps just to start the engine, did people back then lament the difference between "cars" and "mechanics' cars"?
I think this split is going to get worse, especially in the Apple ecosystem. Their apparent desire is that the iPad and Pro are the computer replacement, but there isn't (and won't be anytime soon) a way to create applications for those platforms from that (iOS) environment. Their, admittedly market-speak, statements on stage hint that they would like to see the tablets/phones replace desktops for the larger userbase. Odd times.
What do you mean you "cannot" code on tablets and smartphones? There are nice interpreters and compilers in the official app stores for major mobiles OS, aren't there? I've used Python on iOS, Android and Windows Phone. Also J, Ocaml, some dialects of Lisp, C# and Ruby, that I can remember now (each language on at least one of those OSes, sometimes more than one). Not to mention these devices all come with web browsers which means at the very least you can use JavaScript (I've done at least one Project Euler question on an iPod Touch in CoffeeScript standing in line at the bank.)
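For scale, this is the kind of thing I mean — Project Euler's first problem, small enough to type into a phone's interpreter (sketched here in Python rather than CoffeeScript):

```python
# Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
# Short enough to write on a phone keyboard while standing in line.
def euler1(limit=1000):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(euler1())  # 233168
```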
The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)
Those are "second-class" or even "third-class" citizens in the ecosystem. Can you use those language interpreters and compilers to write apps that can interact with the system and exchange data with the other apps? That's what makes the traditional, document-centric, PC ecosystem so powerful.
While being able to play around with Project Euler can be fun, it amounts to "I can run a Turing-machine simulator" and doesn't represent anything more than a tiny fraction of what people want to do with computers when they say they want to "code". You may as well be playing one of the numerous puzzle games that involve much of the same concepts.
To use your iPod Touch as an example, if it were more like a traditional desktop computer, you would also be able to do things like write an app to manage your music playlists.
> The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)
Not surprising if it's a Windows tablet based on the PC architecture - those are far closer to the traditional desktop than iDevices and Androids. If by C# compiler you're referring to the one that comes with the .NET framework, that's been there since the first versions; pity it's not so well known with MS trying to push VS as hard as possible...
All of this will disappear within the next 5 years, as the distinction between programming and consumption narrows down.
It seems like a vast majority of software developers, consciously or not, do not wish for software development to improve beyond a certain point, as they fear it would become too accessible and therefore lower the value of their skills. The truth is that we actively make programming as difficult as possible, and everybody loses. I can understand that writing code as text would have made sense 50 years ago, but there is no excuse for it today.
Consumer UI is now reaching the 3rd dimension with AR and VR, while software development is stuck in the 1st dimension. A long linear piece of string. It is difficult to believe that those who have the power to create great consumer UX are completely blind to improving their own. Software development has some of the worst UX ever.
The solution to all of those issues has been known for a while, and is dead simple to understand. We need to create a new communication platform, powered by ideas from logic programming and the semantic web. Think of it as 2 huge semantic knowledge graphs, the first describing the real state of the world, the second describing the ideal state of the world. Build a UI on top of it (which should feel more like a graph-oriented Excel than RDF/Prolog) to let people, agents and IoT devices communicate "what is" and "what should be". Then, all it takes is an inference algorithm that can match providers with seekers, get them to commit to some set of world changes (through some sort of contract), and let people manage and track the commitments/tasks they're expected to get done. That's it, that replaces 80% of software needs. Thank you very much.
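To be concrete about what I mean by matching providers with seekers, here's a toy sketch (every name and structure in it is hypothetical, invented for illustration, not an existing system):

```python
# Two tiny "knowledge graphs" as sets of (subject, relation, object)
# triples: one for the real state of the world, one for the ideal state.
is_state = {("alice", "offers", "bike_repair"), ("bob", "offers", "tutoring")}
should_be = {("carol", "needs", "bike_repair"), ("dave", "needs", "catering")}

def match(is_state, should_be):
    """Pair each seeker with a provider, yielding commitments to track."""
    offers = {thing: who for (who, rel, thing) in is_state if rel == "offers"}
    commitments = []
    for (seeker, rel, thing) in sorted(should_be):
        if rel == "needs" and thing in offers:
            commitments.append((offers[thing], seeker, thing))
    return commitments

print(match(is_state, should_be))  # [('alice', 'carol', 'bike_repair')]
```

A real inference engine would of course need negotiation, contracts, and trust on top, but the core loop — diff the two graphs, match, commit — is this simple.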
Interesting ideas (even though your prediction regarding the next five years seems rather... bold).
Where is this vision sketched in some more detail? Any links?
As I understand it, Microsoft has copied the Linux kernel system call interfaces and provided their own underlying implementation.
Given that Microsoft supported Oracle's view that the structure, sequence, and organization of the Java programming interfaces were covered by copyright law, then surely they would also agree that the same holds true for the Linux kernel system call interfaces.
I don't like the APIs-are-copyrightable decision, but given that's the current state, why aren't we talking about how this is a violation of the Linux kernel copyright license -- the GPL?
One could argue that the Linux syscall interface is closer to an ABI than an API, since you don't directly code against it. Don't know what implications that has in this context, though.
One legal thing that I'm also wondering about is the "Linux" trademark. I thought the Linux Foundation kept close tabs on how you were allowed to use the trademark, and one requirement was that the Linux kernel was actually involved?
> One legal thing that I'm also wondering about is the "Linux" trademark.
This probably explains why they never talk about Linux (at least I never saw it), but always about Ubuntu. I guess they have an agreement with Canonical.
I'd say that this part of the ABI is definitely something people code directly against - so there's really no distinction between API and ABI. The parts of the ABI you don't are things like the function calling conventions, type widths, etc.
Do you really think a multi-billion-dollar company like Microsoft wouldn't have their legal team all over this? Don't you think they would have researched this, discussed their implementation, and made sure everything they were doing was going to meet the GPL's requirements?
This same "multi billion dollar" company had an AI bot tweeting Nazi propaganda a week ago. They spectacularly failed in their xbox one release, having to completely retool and regroup. Their Windows Phone efforts remain a complete disaster and are now doomed to failure.
The whole "they're a big company...don't you think they've thought of this!" argument (and its many "do you really think they'll lose?" variations) is always a fallacy. That doesn't make the argument about the copyright of ABIs valid, but at the same time the notion that Microsoft is big therefore they must be right is absurd.
You make it sound as if the "law" is easy. Everyone can have their own interpretation of the law, and often those interpretations are complete opposite. That's why we have two sides in a court of law.
Microsoft's lawyers likely decided that the move is "worth the risk". But they wouldn't be able to be 100% sure that it's either legal or illegal anyway. You can only be 100% sure after someone challenges you in Court, and then judges decide a certain way.
"As I understand it, Microsoft has copied the Linux kernel system call interfaces and provided their own underlying implementation."
Not sure your understanding is correct, but in any case is that not precisely what Wine does on Linux when running Windows apps? Are you worried about Windows copyright violations with Wine? From Wine webpage, "Wine translates Windows API calls into POSIX calls on-the-fly. . ." [1]
Windows doesn't provide consistency between versions at the interrupt level, and Wine doesn't provide any interface at that level. It's basically a COFF loader, and a bunch of regular userspace functions in DLLs that do everything.
Linux, on the other hand, provides exactly that, and this wrapper makes it so that you can actually run "movl $1, %eax; movl $0, %ebx; int $0x80" and it will actually call the equivalent of exit(0).
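You can see that stability from userspace without writing assembly — e.g. from Python, via libc's `syscall()` wrapper (assuming x86-64 Linux; syscall numbers are per-architecture, so 39 here is an assumption about the platform):

```python
import ctypes
import os

# Invoke the kernel by raw syscall number, bypassing libc's getpid()
# wrapper. 39 is getpid on x86-64; 32-bit x86 via int 0x80 uses 20.
libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39  # x86-64 only

pid = libc.syscall(SYS_getpid)
print(pid == os.getpid())  # True
```

It's exactly this numbered, interrupt-level interface that the Windows subsystem has to reimplement, because programs are allowed to depend on it directly.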
What's interesting to me right now is whether Microsoft is saying it's OK in one context (Linux interfaces on Windows) but not in the other (Java interfaces on Android), and if so, why the two are different.
MS doesn't seem to think that long-standing programming practices have suddenly been outlawed, as it is continuing to implement Apple's proprietary iOS frameworks in Project Islandwood (https://github.com/Microsoft/WinObjC). The ruling on API copyrightability does not set precedent in any of the courts that normally hear copyright cases.
libc is LGPL. The "problem" you're highlighting here is EXACTLY opposite what the entire open source movement was designed to prevent. It couldn't be less of an issue.
This new Microsoft is open sourcing everything; maybe their own underlying implementation is on the way to being opened (maybe it's open somewhere already).
This implementation is definitely a work derived from the NT kernel. If reimplementing an API propagates copyright, the only permissible licensing outcome would be for MS to release the complete NT kernel under the GNU GPLv2.
This is not going to happen. And that's not entirely bad news, because it might give us leverage (by estoppel) if MS ever wants to litigate against free software on the theory that reimplementing an API propagates copyright.
The new Microsoft is the same one that has been continuously and recently involved in copyright litigation. That didn't just go away overnight because they started a PR campaign.
I have to say, after the initial excitement, I'm a bit disappointed by how this is implemented. Apparently there is little or no interaction between the Linux world and the Windows world in this system. I don't see the benefits over running a classical Linux-as-a-process like coLinux, or something like Cygwin or MinGW.
The option to run unmodified executables is nice if you have closed-source linux binaries, but they are rare, and this is directed towards developers and not deployment anyway (where this might be a useful feature).
When I heard "Linux subsystem", I was hoping for a fuller integration. Mapping Linux users to Windows users, Linux processes to Windows processes etc.. I want to do "top" in a cmd.exe window and see windows and linux processes. Or for a more useful example, I want to use bash scripts to automate windows tools, e.g. hairy VC++ builds. And I thought it would be possible to throw a dlopen in a Linux program and load Windows DLLs. Since I don't need to run unmodified Linux binaries, I don't see what this brings to me over cygwin.
I am hoping though that this might be a bit more stable (due to ubuntu packages) and faster than Cygwin, and that it might push improvements of the native Windows "console" window.
I would bet that it will only get better over time and include a fair amount of the things you're talking about within a few years. It's obviously a big push to pick up the current "I use a Mac for development because it's Unix" crowd; I'm sure they're taking it seriously and want the support for developing in a Unix style to be A+.
How will they pick up the crowd who chooses their OS because it's mostly FOSS and not a branch of the NSA used for mass data collection? Most devs I know care about privacy and it was one of the main reasons they switched from Windows.
Most Linux programmers I know aren't Windows devs as much as the MS shill team would like everyone on social media to believe.
You don't have to be part of the FOSS crowd to support FOSS. I'd wager the majority of the programmers you know would be ecstatic for Windows or OS X to go open source, and if they use OS X/iOS they probably do care about their privacy.
I don't know a single developer that uses a windows phone or a Windows workstation purely out of choice, most devs I know that are ingrained in Windows are using it because they have to.
Stack overflow statistics show that programmers disproportionately choose OS X and Linux over Windows when compared to typical desktop usage (Linux use skyrockets among programmers compared to desktop).
These "Linux programmers who want Windows" only exist on the internet as far as I can tell. No one actually wants to use Windows.
Mapping the processes across implies all sorts of strange things - what happens if you try to send a Linux signal to a Windows process?
Mapping the users is possible and "SFU" did this, with a couple of caveats (Windows requires group and user names to be different, while UNIX systems often have groups with the same name as users).
I don't think this is a Linux or GNOME killer, but it might put a dent in Cygwin and git-bash.
Wine somehow solves that. Even if almost nobody uses it, a Windows application is still able to use native APIs if it detects that it's running in Wine. For example, the Windows Steam client checked the Wine version long before native Steam appeared.
Windows does have signals, just not nearly as many as Unixes, and they are mostly built around ways to kill a process. So I imagine all the non-terminate ones will be mapped to just be ignored.
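The size difference is easy to check from Python — on Linux the richer set is present and a non-terminate signal round-trips through a handler; on Windows most of these names simply don't exist (how a compatibility layer maps them is my speculation above, not documented behaviour):

```python
import os
import signal

# Names that exist on Unix but are absent from Windows' native signal set.
unix_only = ["SIGUSR1", "SIGHUP", "SIGALRM"]
present = [name for name in unix_only if hasattr(signal, name)]
print(present)  # on Linux: ['SIGUSR1', 'SIGHUP', 'SIGALRM']

# On Linux, a non-terminate signal can be caught and handled normally.
if hasattr(signal, "SIGUSR1"):
    got = []
    signal.signal(signal.SIGUSR1, lambda signum, frame: got.append(signum))
    os.kill(os.getpid(), signal.SIGUSR1)  # delivered to ourselves
    print(got == [signal.SIGUSR1])  # True
```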
Hmm, I guess I'd assumed that I'd be able to use bash scripts to automate Windows functionality (that was probably the most exciting part for me!). You're saying that's not currently possible?
What about more basic things, like moving files around, etc.?
I'd be happy if I never had to write a (Windows) batch script again...
Oh man, thanks for the tip! Works wonderfully. I just apt-get'ed synaptic and it seems totally functional :) Xemacs and Angband don't work, but the fact that so much works already bodes pretty well for the future.
I currently don't have access to a Windows box, but am currently working on a CLI app in Swift for OSX and Linux. It would be interesting to see if this effectively makes swift cross-platform "for free".
I did not try GUI stuff, but when my tablet went to sleep while executing a long-running command, when I returned to the bash shell, the command failed with a message stating "interrupted syscall" or the like. Not sure if this is the common/intended behaviour.
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 256 bits)
OpenGL version string: 2.1 Mesa 10.1.3
OpenGL shading language version string: 1.30
glxgears works for about a second, then crashes:
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server "localhost:0.0"
after 732 requests (732 known processed) with 0 events remaining.
I wonder who came up with the "Bash on Windows" tagline. That was a really smart idea. I think most of us would have run with "Emulated Linux syscall layer from user mode processes on Windows". Promoting bash specifically seems to me like engineering marketing genius -- less technically knowledgeable people are more likely to be familiar with bash, while the more knowledgeable are going to think "wait...what? how do they do that? that would mean...", which works better than simply saying what they have done.
Is this sarcasm? Bash on Windows definitely comes before "Emulated Linux syscall layer from user mode processes on Windows" ... it's a great name, sure, but marketing genius?
Smart move by Windows. I guess that developer usage of an OS ultimately results in developer developments for the OS, though I don't have any number for this. It seems to me that a lot of developers, especially at startups, have switched to OS X with its shiny GUI and UNIX compatibility. I'd hazard the guess that this will ultimately result in OS X becoming more of a developer target over time. Initially for developer-related stuff (see Dash as an example that is only available for OS X (and Zeal for Linux)), but later probably for other stuff as well.
What's illustrative for the dominance of *NIXes in development are the number of projects on Github that contain only +NIX installation instructions and no Windows instructions (again, anecdata).
So if Windows wants to remain competitive, they need to retain developers. And as the +nix way of developing seems to be dominant now in quite a number of fields, Microsoft needs to adapt.
Why, you're asking, do I think that the +NIX way of development is dominant today? In a nutshell, Web -> Unix Servers -> POSIX shells -> Languages that work best with POSIX -> OSs that are POSIX-compliant.
Edit: Asterisks don't work as expected here. At least not in a Markdown-compatible way.
Is it that smart?
Being developer friendly sounds like just plain common-sense, not some genius breakthrough.
The question should be more why has it taken them so long to get to this point.
They have always tried to be friendly to Microsoft developers, and I think they sort of assumed that UNIX would go away when they won. But it's now clear that UNIX has won for web services, and the web has beaten old-style client-server. And the Windows remote admin/cloud admin/mass deployment features appear to have lost as well.
Maybe. It is definitely the common-sense thing to do today, five years ago, it would have been smart. From a pre-Nadella perspective, you could have called it revolutionary, but now we're used to Microsoft participating in OSS, so it's much less so.
I'd just like to interject for a moment. What you’re referring to as Windows, is in fact, GNU/Windows, or as I’ve recently taken to calling it, GNU plus Windows.
Linux users were going around for a while saying "this is the year of linux on the desktop", and yeah it kind of turned into a bit of a meme.
Realistically linux did hit it big, but on a phone OS. It's now one of the most installed kernels in the world, but its brand is hidden. Linux is also incredibly important in the server space, and everyone knows this.
Linux will never have its year on the desktop in my opinion, but it will still be all over the place in the server/phone space. It just won out in other areas than the desktop.
A bit offtopic, but 2009 was the year of the Linux Desktop for me (using Ubuntu). Performant on older hardware, eye candy (compiz) on newer hardware, the Gnome 2 interface was familiar and incredibly polished, and you had a lot of choice (linuxshouldbeaboutchoice.com).
They lost me with all the rewriteritis and monodaemonisation that followed. I switched to MacOS (hackintosh) and was very happy for a while, since it could run all the Unix stuff, most of the productivity stuff (MS Office), and many games. It was for a long time the most plain, conservative OS (while Windows was going crazy with 8).
But recently, I've found Windows to be the OS that "just works" and gets out of my way - which was pretty surprising to me.
If anybody killed alternative desktops, it is not MS, but the desktops themselves.
>I've found Windows to be the OS that "just works" and gets out of my way
I've had the opposite experience. Windows does not "just work" and it certainly does not "stay out of the way".
I have USB headphones I can't use in Windows because they connect but Windows doesn't let me switch to them. When I plug in an external monitor my OS comes to a crawl and it doesn't speed back up until I restart the whole thing. When I unplug a monitor it loses my windows.
And did you hear the story about the guy who lost his job because Windows decided to update the .NET framework right before he was scheduled to do a presentation at a business meeting? Doesn't sound like Windows stays out of the way to me.
I wish Windows "just worked" but it doesn't. It breaks all the time unless you're a power user. Giving my parents Linux was the best thing I ever did for them because it turned their laptops from a source of constant frustration to an always-on communication machine. We went from hundreds of ads and dozens of toolbars on windows to a Linux machine that just works.
Now I'm just trying to get my dad to switch to Linux for work so he doesn't have to install his printer drivers again every time he wants to print something. All he uses for work is Chrome any way.
2007 was the year of Linux on the Desktop - with netbooks, Linux was literally competition for Windows on the desktop for the first time and Microsoft had to make XP super-cheap.
Pretty much, finally MS decided to make it happen. :) On a more serious note, you can get all of the benefits of windows and linux in one without docker or any vm running. This is great!
No, Linux on the Desktop has just died. I expect both the KDE and Gnome projects to be dead within a (very) few years, with X.org probably close behind.
All hail Winux though. (That's the name for this mix I came up with.)
Before you downvote this without thinking ... consider, for example, KDE is severely understaffed and this will deplete them further. Who will bother with X.org bugs and drivers now? What's the point? Who is your target audience? You need to drink a real big dose of Stallman kool-aid to continue with Linux if this thing on Windows works as promised.
I have been using Linux solely on my laptop since 2004. I am sick of the constant driver problems. Yes, yes, you can connect to your home router or the router in the cafe. Now go and try and connect to an enterprise network. Perhaps with VPN.
KDE has always been understaffed and will always be understaffed.
But it has been improving all the time with every single release.
The problem is that you (and millions of people who were looking for Linux desktop to "win") are just not excited anymore since new form factors (phones, tablets) arrived.
But I actually think the Linux desktop is a winner. There are several high-quality desktop environments suitable for all kinds of use cases.
Yeah, we are not dominating the world. That was a short, naive dream in the early 2000s. But we have awesome desktops and that's what matters.
Disclaimer: minor KDE contributor but these were my thoughts not KDE's.
I do all of the things you mention without problems. I definitely don't think GNU/Linux will die as a result. First, syscall emulation will always be clunky. Second, many people care about their freedom. Third, what makes you think that a majority of people using GNU/Linux will switch? I haven't had driver or network problems in the past 3 years on any of my various machines.
Keep in mind Windows 10 lets the folks at Redmond remotely remove software from your machine. Before you go declaring Linux dead, you might want to think about how that could impact you. Not to mention that Microsoft has been known to Embrace, Extend, Extinguish.
Linux becomes even more approachable thanks to "Winux". Let people learn the basics of the CLI and get comfy with more open source tools -- then reinstalling your computer with a Linux distro (and putting your Win-only apps in a VM or on Wine) is a small move.
As I understand it, vendors will probably have paying clients for Linux desktops for some time to come. It is very hard to imagine a situation where the demand for a Linux desktop become so low that no-one will maintain the required software infrastructure.
I've deemed it "Frankenstein OS" because they've sewn a whole bunch of parts together to make an unwieldy monster that doesn't quite work as well as the individual pieces did on their own.
http://boingboing.net/2012/08/23/civilwar.html
http://boingboing.net/2012/01/10/lockdown.html
...and RMS predicted this almost 20 years ago:
http://www.gnu.org/philosophy/right-to-read.en.html
IMO the most impressive thing about OSX is how well it supports both audiences: it feels like a push-button, high luxury, comfortable, easy device to my mother. But under the hood there are great logs and a solid BSD-based operating system model. It comes prepackaged with a lot of developer tools, hidden in a place where I would look right away, but my mother would never notice.
Sure, some companies use software to limit and control their customers (cough cough Sony), usually with sharp legal/lobbyist teeth to enforce that control. But 99% of companies out there just want to make their users feel comfortable, high status, and competent to use their device.
While I agree with RMS that this split is inevitable, I don't believe it's about control. It's about two distinct market segments: auto enthusiasts who want control over the torque settings in their high-end car, and people who just want a car that fucking works. Chefs who want sector-by-sector control over their oven's heating profile, and people who just want to be able to cook a fucking roast without burning it.
Tablets and phones are consumption. You can't do any serious work on them - development included.
This is why laptops and computers have stuck around in spite of the proliferation of cheap, tiny, elegant consumption devices.
So no, I don't think laptops and computers will go away for non-nerds, just for people who don't produce anything.
And a lot of music creation apps exist for tablets/phones.
This production/consumption divide is too rigid.
Development isn't done on tablets because the input devices we use to write code are limited to the keyboard, and because most people think of text files as the code itself, rather than as a serialisation/deserialisation format for an AST.
You could easily build an AST with gestures and speech rather than by tapping buttons, and I think in 10-20 years' time that's how we'll make software.
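The "text is just a serialisation of the AST" point is easy to demonstrate with Python's ast module: parsing source deserialises it into a tree, and you can build the same tree directly without ever writing source text, which is all a gesture/speech editor would need to do.

```python
import ast

# Parsing source text is just deserialising it into a tree.
tree = ast.parse("1 + 2", mode="eval")

# The same tree can be built directly, with no source text involved.
expr = ast.Expression(
    body=ast.BinOp(left=ast.Constant(1), op=ast.Add(), right=ast.Constant(2))
)
ast.fix_missing_locations(expr)  # fill in line/column info the compiler needs
result = eval(compile(expr, "<built-by-hand>", "eval"))
print(result)  # 3
```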
No. Because of the Glorious PC Master Race - mods, trainers, hacks, overlays etc - these all need dev and root access.
Btw, game modding, cracking, save-game editing, etc. are the best gateway drugs toward a full-blown IT career.
http://www.ubuntu.com/tablet/developers
https://plus.google.com/u/0/105864202742705090915/posts/jNvZ...
Hell, even the lack of window management in iOS/Android makes the UX much easier to understand for the majority of users I know. My granddad, who was an excellent mechanical engineer, has been using computers for the last 20 years, and he still struggles with the click/double-click distinction.
Have you tried teaching him that? I highly doubt an old person, especially one with an engineering background, will have trouble understanding the distinction if someone bothers explaining it to them.
Or in general - it's surprising how much non-tech people can understand about technology if someone bothers to sit down with them and explain the concepts to them. Usually the reason they don't learn this stuff themselves is the typical human impulse of "if I haven't figured it out in 3 seconds flat, it's too difficult and I won't understand it".
Only if you want to keep them illiterate, which companies are more than happy to do since it means they can be more easily persuaded and dependent consumers.
I don't think anyone ever has.
So here's just a few ways you can code on Android:
QPython: https://play.google.com/store/apps/details?id=com.hipipal.qp...
AIDE (Java): https://play.google.com/store/apps/details?id=com.aide.ui&hl...
Terminal IDE: https://play.google.com/store/apps/details?id=com.spartacusr...
If all else fails, just deploy debian with Linux Deploy: https://play.google.com/store/apps/details?id=ru.meefik.linu...
If desktops become more expensive, it'll just mean people are more motivated to make tools like this. Android phones and tablets are basically treated as cheap commodities and there's an extremely competitive market for them; if anything, the entry price has gone down.
Now, admittedly I'm not sure how this situation is on iOS, but maybe someone could link similar tools on there?
For one thing, a Raspberry Pi is more powerful than the Sinclair ZX-81, Apple IIe, or Atari 400/800 I had access to back then, and much cheaper.
The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)
While being able to play around with Project Euler can be fun, it amounts to "I can run a Turing-machine simulator" and doesn't represent anything more than a tiny fraction of what people want to do with computers when they say they want to "code". You may as well be playing one of the numerous puzzle games that involve much of the same concepts.
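For a sense of scale, a typical early Project Euler exercise fits in a couple of lines of Python: fun, but, as the parent says, a long way from "writing an app".

```python
# Project Euler problem 1: sum of the multiples of 3 or 5 below 1000.
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # 233168
```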
To use your iPod Touch as an example, if it were more like a traditional desktop computer, you would also be able to do things like write an app to manage your music playlists.
> The tablet I currently own cost me $80 and came with a C# compiler preinstalled! (Maybe that's an extreme example: It is a Windows tablet, and Android or iOS only come with JavaScript JIT compilers preinstalled.)
Not surprising if it's a Windows tablet based on the PC architecture; those are far closer to the traditional desktop than iDevices and Androids. If by C# compiler you're referring to the one that comes with the .NET Framework, it's been there since the first versions; pity it's not so well known, with MS trying to push VS as hard as possible...
It seems like a vast majority of software developers, consciously or not, do not wish for software development to improve beyond a certain point as they fear it would become too accessible and therefore lower the value of their skills. The truth is that we actively make programming as difficult as possible, and everybody loses. I can understand that writing code as text would make sense 50 years ago, but there is no excuse for this today.
Consumer UI is now reaching the 3rd dimension with AR and VR, while software development is stuck in the 1st dimension. A long linear piece of string. It is difficult to believe that those who have the power to create great consumer UX are completely blind to improving their own. Software development has some of the worst UX ever.
The solution to all of these issues has been known for a while, and is dead simple to understand. We need to create a new communication platform, powered by ideas from logic programming and the semantic web. Think of it as two huge semantic knowledge graphs, the first describing the real state of the world, the second describing the ideal state of the world.

Build a UI on top of it (which should feel more like a graph-oriented Excel than RDF/Prolog) to let people, agents and IoT devices communicate "what is" and "what should be". Then, all it takes is an inference algorithm that can match providers with seekers, get them to commit to some set of world changes (through some sort of contract), and let people manage and track the commitments/tasks they're expected to get done. That's it, that replaces 80% of software needs. Thank you very much.
Knowledge Graph -> Semantic Marketplace -> Smart Contracts -> Task Management
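A toy sketch of the matching step, with every name and fact made up for illustration; a real system would sit on an RDF/Datalog engine rather than Python sets of tuples.

```python
# Two tiny "graphs" as sets of (subject, relation, object) triples:
# one for what is, one for what should be. All names are hypothetical.
world = {("alice", "offers", "bike-repair"), ("bob", "offers", "tutoring")}
goals = {("carol", "needs", "bike-repair")}

def match(world, goals):
    """Pair each seeker's need with any provider offering that service."""
    offers = {(svc, who) for (who, rel, svc) in world if rel == "offers"}
    return sorted((seeker, provider, svc)
                  for (seeker, rel, svc) in goals if rel == "needs"
                  for (offered, provider) in offers if offered == svc)

print(match(world, goals))  # [('carol', 'alice', 'bike-repair')]
```

The inference, contract, and task-tracking layers the comment describes would sit on top of a matcher like this.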
Given that Microsoft supported Oracle's view that the structure, sequence, and organization of the Java programming interfaces were covered by copyright law, then surely they would also agree that the same holds true for the Linux kernel system call interfaces.
I don't like the APIs-are-copyrightable decision, but given that's the current state, why aren't we talking about how this is a violation of the Linux kernel copyright license -- the GPL?
One legal thing that I'm also wondering about is the "Linux" trademark. I thought the Linux Foundation kept close tabs on how you were allowed to use the trademark, and one requirement was that the Linux kernel was actually involved?
This probably explains why they never talk about Linux (at least I never saw it), but always about Ubuntu. I guess they have an agreement with Canonical.
The whole "they're a big company...don't you think they've thought of this!" argument (and its many "do you really think they'll lose?" variations) is always a fallacy. That doesn't make the argument about the copyright of ABIs valid, but at the same time the notion that Microsoft is big therefore they must be right is absurd.
Microsoft's lawyers likely decided that the move is "worth the risk". But they wouldn't be able to be 100% sure that it's either legal or illegal anyway. You can only be 100% sure after someone challenges you in Court, and then judges decide a certain way.
but legally speaking, they seem to have adopted that culture.
Not sure your understanding is correct, but in any case, is that not precisely what Wine does on Linux when running Windows apps? Are you worried about Windows copyright violations with Wine? From the Wine webpage: "Wine translates Windows API calls into POSIX calls on-the-fly. . ." [1]
[1] https://www.winehq.org/
Linux, on the other hand, provides exactly that, and this wrapper makes it so that you can actually run "movl $1, %eax; movl $0, %ebx; int $0x80" and it will actually call the equivalent of exit(0).
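You can poke the same system-call layer from userspace via libc's syscall(2) wrapper. A sketch assuming an x86-64 Linux host; syscall numbers are ABI-specific (getpid is 39 on x86-64, but 20 via the i386 int 0x80 table the parent's example uses):

```python
import ctypes
import os

# CDLL(None) opens the running process's own symbol namespace, which
# includes libc's syscall(2) on Linux.
libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # assumption: x86-64 Linux syscall numbering
pid = libc.syscall(SYS_getpid)

# The raw syscall and the libc wrapper agree on who we are.
print(pid == os.getpid())  # True
```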
What's interesting to me right now is whether or not Microsoft is saying it's OK in one context (linux interfaces on windows), and not the other (java interfaces on android), and why are they different?
This is not going to happen. And that's not entirely bad news, because it might give us leverage (by estoppel) if MS ever wants to litigate against free software on the theory that reimplementing an API propagates copyright.
The option to run unmodified executables is nice if you have closed-source linux binaries, but they are rare, and this is directed towards developers and not deployment anyway (where this might be a useful feature).
When I heard "Linux subsystem", I was hoping for a fuller integration. Mapping Linux users to Windows users, Linux processes to Windows processes etc.. I want to do "top" in a cmd.exe window and see windows and linux processes. Or for a more useful example, I want to use bash scripts to automate windows tools, e.g. hairy VC++ builds. And I thought it would be possible to throw a dlopen in a Linux program and load Windows DLLs. Since I don't need to run unmodified Linux binaries, I don't see what this brings to me over cygwin.
I am hoping though that this might be a bit more stable (due to ubuntu packages) and faster than Cygwin, and that it might push improvements of the native Windows "console" window.
* https://news.ycombinator.com/item?id=11416392
... would you, too, agree with a call for its resurrection?
* https://news.ycombinator.com/item?id=11391841
* https://news.ycombinator.com/item?id=11391798
Most Linux programmers I know aren't Windows devs as much as the MS shill team would like everyone on social media to believe.
You don't have to be part of the FOSS crowd to support FOSS. I'd wager the majority of the programmers you know would be ecstatic for Windows or OS X to go open source, and if they use OS X/iOS they probably do care about their privacy.
I don't know a single developer that uses a windows phone or a Windows workstation purely out of choice, most devs I know that are ingrained in Windows are using it because they have to.
Stack overflow statistics show that programmers disproportionately choose OS X and Linux over Windows when compared to typical desktop usage (Linux use skyrockets among programmers compared to desktop).
These "Linux programmers who want Windows" only exist on the internet as far as I can tell. No one actually wants to use Windows.
Mapping the users is possible and "SFU" did this, with a couple of caveats (Windows requires group and user names to be different, while UNIX systems often have groups with the same name as users).
I don't think this is a Linux or GNOME killer, but it might put a dent in Cygwin and git-bash.
I think Microsoft can do something similar.
https://msdn.microsoft.com/en-us/library/xdkz3x12.aspx
Performs the default action as if it were a Linux process. Mostly terminate or ignore.
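The POSIX behaviour being mirrored here is easy to see from Python: a signal's default action (here, terminate) applies unless the process installs a handler. A minimal demo, POSIX only:

```python
import os
import signal

# SIGUSR1's default action is to terminate the process; installing a
# handler overrides that, so the process observes the signal and lives.
received = []
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))

os.kill(os.getpid(), signal.SIGUSR1)  # deliver the signal to ourselves

print(received == [signal.SIGUSR1])  # True
```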
What about more basic things, like moving files around, etc.?
I'd be happy if I never had to write a (Windows) batch script again...
No. Execution mode incompatible - see e.g. https://github.com/wishstudio/flinux/wiki/Difference-between... for details.
What would really interest me: how was fork() implemented by MS here? The same method as http://stackoverflow.com/questions/985281/what-is-the-closes... or have different interfaces been created?
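For reference, this is the semantics any Windows implementation has to emulate: fork() duplicates the process, and both copies continue from the call with different return values. A minimal POSIX-only demo:

```python
import os

r, w = os.pipe()
pid = os.fork()           # child gets a (copy-on-write) copy of our memory
if pid == 0:              # in the child, fork() returned 0
    os.write(w, b"hi from child")
    os._exit(0)
else:                     # in the parent, fork() returned the child's pid
    os.close(w)
    msg = os.read(r, 64)
    os.waitpid(pid, 0)    # reap the child
    print(msg.decode())   # hi from child
```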
Can you check what happens after you wake from sleep/hibernation? Are those apps still fully functioning?
glxinfo reports
glxgears works for about a second, then crashes. When I run it from strace, it keeps running.

What's illustrative of the dominance of *NIXes in development is the number of projects on GitHub that contain only +NIX installation instructions and no Windows instructions (again, anecdata).
So if Windows wants to remain competitive, it needs to retain developers. And as the +nix way of developing seems to be dominant now in quite a number of fields, Microsoft needs to adapt.
Why, you're asking, do I think that the +NIX way of development is dominant today? In a nutshell, Web -> Unix Servers -> POSIX shells -> Languages that work best with POSIX -> OSs that are POSIX-compliant.
Edit: Asterisks don't work as expected here. At least not in a Markdown-compatible way.
* https://news.ycombinator.com/item?id=11446696
That being said, I'm typing this comment on my workstation running Linux, and I for one am getting very tired of this "year of the Linux desktop" joke.
What OS you run is an individual choice, stop trying to declare a single winner.
Realistically, Linux did hit it big, but on a phone OS. It's now one of the most installed kernels in the world, but its brand is hidden. Linux is also incredibly important in the server space, and everyone knows this.
Linux will never have its year on the desktop in my opinion, but it will still be all over the place in the server/phone space. It just won out in other areas than the desktop.
You != masses.
There might not be a clear single winner, but there is a clear single loser. Statistically speaking.
They lost me with all the rewriteritis and monodaemonisation that followed. I switched to MacOS (hackintosh) and was very happy for a while, since it could run all the Unix stuff, most of the productivity stuff (MS Office), and many games. It was for a long time the most plain, conservative OS (while Windows was going crazy with 8).
But recently, I've found Windows to be the OS that "just works" and gets out of my way - which was pretty surprising to me.
If anybody killed alternative desktops, it is not MS, but the desktops themselves.
I've had the opposite experience. Windows does not "just work" and it certainly does not "stay out of the way".
I have USB headphones I can't use in Windows because they connect but Windows doesn't let me switch to them. When I plug in an external monitor my OS comes to a crawl and it doesn't speed back up until I restart the whole thing. When I unplug a monitor it loses my windows.
And did you hear the story about the guy who lost his job because Windows decided to update the .NET framework right before he was scheduled to do a presentation at a business meeting? Doesn't sound like Windows stays out of the way to me.
I wish Windows "just worked" but it doesn't. It breaks all the time unless you're a power user. Giving my parents Linux was the best thing I ever did for them because it turned their laptops from a source of constant frustration to an always-on communication machine. We went from hundreds of ads and dozens of toolbars on windows to a Linux machine that just works.
Now I'm just trying to get my dad to switch to Linux for work so he doesn't have to install his printer drivers again every time he wants to print something. All he uses for work is Chrome anyway.
All hail Winux though. (That's the name for this mix I came up with.)
Before you downvote this without thinking ... consider, for example, KDE is severely understaffed and this will deplete them further. Who will bother with X.org bugs and drivers now? What's the point? Who is your target audience? You need to drink a real big dose of Stallman kool-aid to continue with Linux if this thing on Windows works as promised.
I have been using Linux solely on my laptop since 2004. I am sick of the constant driver problems. Yes, yes, you can connect to your home router or the router in the cafe. Now go and try and connect to an enterprise network. Perhaps with VPN.
But it has been improving all the time with every single release. The problem is that you (and millions of people who were looking for Linux desktop to "win") are just not excited anymore since new form factors (phones, tablets) arrived.
But I actually think the Linux desktop is a winner. There are several high-quality desktop environments suitable for all kinds of use cases.
Yeah, we are not dominating the world. That was a short, naive dream in the early 2000s. But we have awesome desktops, and that's what matters.
Disclaimer: minor KDE contributor but these were my thoughts not KDE's.
Btw, give KDE a try, it's so good these days :)
It becomes even more approachable via "Winux". Let people learn the basics of the CLI and get comfy with more open-source tools; then reinstalling your computer with a Linux distro (and putting your Win-only apps in a VM or on Wine) is a small move.
I've deemed it "Frankenstein OS" because they've sewn a whole bunch of parts together to make an unwieldy monster that doesn't quite work as well as the individual pieces did on their own.