I'm not sure I expected it to take the world "by storm", but I expected more from Google Wave. It was a great concept from a major company, but it was too slow on release and the rollout killed it. In retrospect, it probably wouldn't have lasted anyway: Google was already winding down its federated communication services by that point, and federation was another major selling point of Wave's initial proposal.
But the rollout was just the worst possible way to get a product into the hands of users. If you got in, they gave you some number of invites. You'd set them up to be sent out to your friends or coworkers or whoever. It turned out this just put them in a queue to eventually get an invite. Wave was fundamentally a collaboration platform; without anyone to collaborate with, it had zero value. Fantastic way to fail.
Wave's rollout was botched, its performance requirements were too ahead-of-their-time for 2009 hardware, and its UX (which didn't seem to "get" that sometimes you need to isolate your current work context from the global feed of activity in order to focus) was perplexing. But its legacy of helping to popularize Operational Transforms (and, by extension, CRDTs) beyond academic circles did indeed live on: OT was adopted in Google Docs [0], and CRDTs were used in yesterday's release of Atom's real-time collaborative editing [1]. So in many ways, it accomplished exactly what it set out to do.
> its performance requirements were too ahead-of-their-time for 2009 hardware
For 2009 hardware running javascript in a browser, you mean? I was running much fancier native stuff much faster (including 3d games) on low-to-mid tier 2009 hardware.
> So in many ways, it accomplished exactly what it set out to do.
I think it definitely accomplished its goals technically, but it certainly failed on adoption. I personally never got into it. I tried it quite early and didn't like it: it was clunky and confusing and didn't seem very useful to me at the time. I wonder, if it were re-created with slicker UX (and good performance), would I like it...
Internally at Google it was really, really hard to get on the Wave team. So many people saw it as an opportunity to show what they could do and get a career boost. So, presumably, they had the best Google engineers working on that product.
What I want to say is an obvious thing -- a stellar team doesn't imply success.
Thanks for the anecdote! The politics must have been savage. It also means the team probably cared more about being on a cool project than about the actual problem area itself.
The alternative facts: maybe the great team they had did maximize their chance of success, and it was just one of those 90% of "startups" that don't hit the market.
True. Chat bots supply some of the functionality. In Wave you could have bots which would replace, alter, or add content based on things you typed in or some other rules. Like you could have a stock ticker bot, or one that inserted maps, or ones to do live translations.
The collaborative document editing in Google Docs improved afterwards. The Google Wave Wikipedia entry doesn't mention this, but I have some vague recollection of some of their work being migrated into Google Docs to improve it in that regard. That may be completely wrong, though, since I can't seem to verify it.
The difference is once you got a Gmail account you could immediately start sending and receiving email from it rather than twiddling your thumbs till someone you know got invited.
Gmail invites, by 2004 or 2005(?) when I got in, were instant. If I sent the invite, the friend had a chance to sign up right then.
But email was already an existing protocol. So invites meant nothing. I could still communicate with my family even though they were still on their ISP account. Or my sister with her university account.
Unlike Wave, you didn't have to wait for your friends and colleagues to get a Gmail account before you could talk with them. You could talk with anyone.
There was a time when Napster, Kazaa, eMule were king. The content industry fought against it, but developers came up with decentralized solutions like DHTs or supernodes.
I was convinced the next step would be friend-of-a-friend sharing. You only share files with your direct friends. But they can pass on these files automatically, so their friends can get the files, too. The friends of your friend don't know that the file is originally coming from you, it is completely transparent. The goal is to not let anybody you don't trust know what you are down- or uploading.
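That forwarding rule is simple enough to sketch. The toy below (all names invented, purely illustrative) spreads a file one hop at a time, and each node records only the direct friend it got the file from, so the original source stays hidden beyond one hop:

```python
# Toy friend-of-a-friend sharing: files spread one hop per round,
# and each node only ever sees its direct friend as the "source".
friends = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
}

# node -> {filename: who we got it from (None = we originated it)}
library = {n: {} for n in friends}
library["alice"]["song.mp3"] = None  # alice originates the file

def gossip_round(friends, library):
    """Each node offers everything it currently holds to its direct friends."""
    updates = []
    for node, files in library.items():
        for fname in files:
            for friend in friends[node]:
                if fname not in library[friend]:
                    updates.append((friend, fname, node))
    for friend, fname, source in updates:
        library[friend][fname] = source

gossip_round(friends, library)  # bob gets it from alice
gossip_round(friends, library)  # carol gets it from bob

# carol has the file but only knows bob as its source, not alice
assert library["carol"]["song.mp3"] == "bob"
```

Carol can't distinguish a file bob originated from one bob merely relayed, which is exactly the deniability property described above.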
I would piggyback on the social graph of existing networks to get started. I actually had a prototype of the friend discovery code, based on XMPP (using custom stanzas). You login with your credentials, and it shows you all your friends that are using the same app. It worked with GTalk, Facebook Messenger, and Skype (via Skype API). One nice feature was that this worked without getting an API key from Facebook etc., or having a central server. I was so pissed when they all changed their APIs to make this impossible. It felt like a conspiracy to stop this use case.
I still think if somebody pulled this off, it might work, and it would be pretty disruptive (in the good and the bad sense of the word). It would be the last word in the debate between "data wants to be free, change society to enable what technology promises" and "data has to be a commodity, restrict technology because in capitalism we can't feed artists otherwise".
I, personally, felt that Audiogalaxy was the hallmark program from this era. It was the most surefire way to get the songs you were looking for and discover plenty more in the process.
You'd set the music folder you wanted to share (there was no opting out) and so long as you had the program open your files were available for download by other users. The program operated like an always on satellite torrent program with very low impact. You'd find a file to download and the file would come in many chunks from any available user who had the file on their client side. Downloads were fast and in the event you lost connection or closed Audiogalaxy the download would resume immediately on reload.
From my POV Audiogalaxy was extremely threatening to Copyright holders in a way that prior p2p programs weren't. Your comment about software being especially 'disruptive' and ruffling all of the right feathers is what reminded me of Audiogalaxy.
There was no indication of who you were receiving files from. There were no usernames or features outside of download and upload. In terms of creating a piece of software that picked a mission and executed it I'll always look at Audiogalaxy and think it achieved precisely what it set out to do, entirely.
Soulseek is still alive with the same system. The program doesn't force you to share anything, but users can disable sharing their files with users who don't share anything themselves.
At least in 2012-14 Soulseek was very popular with extremely niche and obscure music fans (stuff not even on What.cd!). Not sure how it is now.
While Audiogalaxy was great at doing P2P, the magical feature that differentiated it from the competition was that all known files were always indexed and available for download. This was not true about, say, Kazaa or Soulseek; when someone closed their client, their files would disappear from searches.
This meant that you could search out a whole bunch of files you wanted, and queue them up for downloading, and whenever the sharers came back online the downloads would start and go through. These days, BitTorrent operates the same way.
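That queue-until-online behaviour is easy to sketch; the class and method names below are invented for illustration, not Audiogalaxy's actual protocol:

```python
class DownloadQueue:
    """Queue requests even when no sharer is online; start them
    as soon as a peer holding the file reappears."""
    def __init__(self):
        self.pending = set()    # filenames we still want
        self.completed = set()

    def request(self, filename):
        self.pending.add(filename)

    def peer_online(self, peer_files):
        """Called when a sharer comes online announcing their file list."""
        available = self.pending & set(peer_files)
        for f in available:
            self.pending.discard(f)
            self.completed.add(f)   # stand-in for an actual chunked transfer

q = DownloadQueue()
q.request("rare-live-set.mp3")
q.peer_online(["other.mp3"])          # this sharer doesn't have it
q.peer_online(["rare-live-set.mp3"])  # the right sharer appears
assert "rare-live-set.mp3" in q.completed
```

The key design point is that the index outlives the peers: requests persist server-side while sharers come and go, which is also how BitTorrent trackers behave with respect to seeds.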
Unfortunately, the rise of streaming content services has kinda killed off the need for file sharing for most people.
Back in the day of MyTunes on a school network, I had HDs full of mp3s. I actively browsed friends' (and complete strangers') libraries to find new things, complete an artist's discography, etc.
Now? Spotify Premium is probably the best value/$ of any purchase I make in a month.
Even if I wanted to go back to my old ways... it's just not there any more. I believe iTunes patched out the features that made MyTunes possible, and people don't have computers full of mp3s discoverable on a network anymore.
> the rise of streaming content services has kinda killed off the need for file sharing for most people.
The history is a bit different. Napster was sued out of existence. As I recall, this was done by the record labels, but the argument was that file sharing was unfair to artists. Which it was. But that was obviously not the real point. Streaming services are notoriously stingy when it comes to paying artists for their work. So access to music is still easy, and the net change is that new middlemen are profiting wildly.
(Streaming vs. file sharing is a different issue. I want to own the music I listen to. I don't want to be dependent on an internet connection, a server, a revokable license, etc. )
I can get all the music I can listen to for $15 a month (Google Play Family, which comes with YouTube Red) and share it with my wife and 4 kids. I think the fact that people can get music on SoundCloud, Pandora, or YouTube made the illegal route less necessary. I personally prefer to just pay the money, get my music on all my devices, and upload the music I have that isn't available.
I also LOVE YouTube Red, and it helps the content creators many times more than my little ad revenue would. Funny how people flagged YouTube Red as anti-content-creator and went after Google with pitchforks, and now content creators keep saying that Red members give them so much more per person. One creator says 1/4 of his income is from YouTube Red (he makes his living on YouTube).
From a revenue point of view, streaming is not a very good way to support an artist[1][2]. The Verge article I linked to states that artists are paid "somewhere between $0.006 and $0.0084" per stream.
If we are very pessimistic about the royalties of a CD sale - let's call it 20%, and say a single CD costs $10, that's ($10 x 20%) / 0.0084 = 238 streams to cover the cost of a single CD.
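The arithmetic checks out; spelling it out (the royalty rate and per-stream figures are the assumptions from the comment above, not audited numbers):

```python
cd_price = 10.00        # assumed CD price
royalty_rate = 0.20     # pessimistic assumed artist cut of a CD sale
per_stream = 0.0084     # upper end of the quoted per-stream payout

streams_to_match_cd = (cd_price * royalty_rate) / per_stream
assert round(streams_to_match_cd) == 238

# At the lower quoted rate of $0.006/stream it's even worse:
assert round((cd_price * royalty_rate) / 0.006) == 333
```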
If you factor in audiophile-level paranoia like buying vinyl, SA-CDs, or FLAC masters, I think there's an argument to be made that an artist could make more money from people buying high-quality editions of their work to rip and pirate than from people streaming their music.
I believe that DC++ file sharing had a model like that, but the torrents outcompeted it for most purposes; most people don't want friend-of-a-friend filesharing, they either want the most effective way to get the latest goods from whoever has it, or the ability to upload stuff while remaining mostly anonymous.
Minor nitpick: Direct Connect (DC) was the protocol; DC++ was merely a popular client, by no means the only one. It wasn't even the official client.
Direct Connect really excelled at college campuses. All the traffic was internal so it was blazingly fast plus the hub had a chat feature which is incredibly useful (pre-reddit) and sometimes people would share old tests and stuff.
I think the biggest motivation for peer-to-peer networks was the lack of legal options for digitally accessing movies and music. The current streaming services and online stores have reduced this ache.
I agree. In fact, I've started to realize I've been using sites like The Pirate Bay just to get a sense of what people are watching, new releases, etc. Ratings tend to be very low, of course; it's their culture to say that any movie sucks. But at least I can get a list of titles and then search for them on paid service providers.
Peer-to-peer was killed by the advent of home routers doing NAT, making arbitrary inbound connections impossible. Configuring port forwarding was generally too difficult for "normal users", the web-UIs offered by typical home routers didn't make it easy enough.
NAT traversal methods exist, but from my experience they are generally unreliable and/or slow.
I still remember the days when P2P connections over the internet "just worked, always" (the early Napster/Kazaa days), and then it went to "it works sometimes", and finally to "almost never works (except if the service offers some kind of middle-man datacenter to bounce the connections between the two NAT'd peers)".
Try to do a IRC DCC file transfer with anyone these days and see what I mean... long ago, it "just worked".
I blame today's unfortunate centralization of internet services largely on NAT breaking P2P connectivity. Who knows, maybe spread of IPv6 will fix this mess... some day...
IIRC, consumer wifi routers supposedly support a protocol (UPnP IGD, and more recently NAT-PMP/PCP) that allows a computer to request a temporary port forward and release it when it's finished using it.
Bittorrent is good enough, and there aren't really any law enforcement concerns any more. Pre-9/11, the FBI and other agencies were actually prosecuting people for unlicensed content distribution, but after 9/11 they decided to focus on higher priorities, so nobody really cares about it any more.
A warning letter from the ISP is not quite the same threat as a few years in a fed pen.
In Germany, it is currently quite dangerous to use Bittorrent illegally. There is a high chance of getting an "Abmahnung", which costs around 600-900 euros.
Using sharehosters with for-pay premium accounts is popular, as is using VPNs with bittorrent. The alternative is low-quality streaming.
I think the content industry isn't pushing for eradicating illegal downloads, they just want it to be at least as expensive, or more inconvenient than the legal options. (Which are quite good by now, if the shows you want to watch are available from your provider.)
> I was convinced the next step would be friend-of-a-friend sharing. You only share files with your direct friends. But they can pass on these files automatically, so their friends can get the files, too. The friends of your friend don't know that the file is originally coming from you, it is completely transparent. The goal is to not let anybody you don't trust know what you are down- or uploading.
I think Freenet is very different. I know Freenet from back in the 2000s. It's a platform to store data in a decentralized, anonymous way. At some point they added a friend-to-friend option, I think (which is what used to be called a "darknet" -- now that refers to sites like Silk Road, but back then it meant closed P2P like WASTE). The closest analogue would be Tor hidden services.
What I envision is more like a modern version of Kazaa. The experience would be like iTunes or Spotify. Select your media folder, give it access to your contact list, and voila, "Pirate iTunes". The closest current analogue would be RetroShare.
I used to work for an on-demand content streaming company, so I'm biased, but the cataloging, searching, and quality of content are big scaling issues for the average user.
I don't know if this has a name, but... ICQ-like "searchable" instant messaging.
Let me explain. In the late 90s, there was an IM client called ICQ. For starters, its UI was leaps and bounds ahead of anything to be seen in the next decade, and it had functionality, like bots and groups, that only became widespread again a few years ago.
But what I'm talking about is the fact that I could look for "female, age between 16 and 20, who likes writing, RPGs and electronic music and is available for chat" and results would come up. You could enter your data, interests, hobbies, etc. in the program and mark yourself as available (only if you wanted) and it was a really nice way to meet people. I collected stamps at the time so I remember I would search for people who were stamp collectors in countries for which I had no stamp, leading to stamp exchanges apart from some interesting conversation (as you could also throw some of your interests in in the search window).
I thought that was the future and the tech could only get better from there, but then came MSN Messenger, with really bare-bones features in general and in particular no "searchable" functionality, and displaced ICQ. And since then, nothing similar has appeared. Instant messengers are focused on talking to people you already know, and if you want to meet new people, you have to use specific-purpose communities or dating sites. But good luck finding someone who likes writing, RPGs and electronic music at the same time... That function of searching people by interests in a huge directory of people (not restricted, e.g., to a dating site) is what I thought would take the world by storm, and as far as I know it doesn't even exist anymore at enough scale to be meaningful, at least in the West (maybe in WeChat, QQ or one of those apps used in Asia they have something similar, I don't know).
Right, Skype had a similar interface in 2005-2010, you could set your status to "Skype Me" (in addition to "Available", "Away" etc.) which was the green "available" Skype icon but with a smiley overlaid. You could search for people by age, gender, location, and I think keywords, and "only Skype Me users" (also implying online at that time). I'm not sure if the interest searching was as advanced as you described but the feature was great. You could then chat to the people you found, or call them, etc.
After a few years, setting your status to "Skype me" meant every few minutes a chat from a new person advertising porn sites or other scams. So I stopped using it. Presumably so did everyone else; at some point they disabled the feature. And now Skype is just a tool to talk to people you already know.
I thought the same thing too. It's quite sad that talking to people is usually limited to dating, and dating has become a domain of Tinder.
I remember an era when chat rooms like IRC had thousands of people online at a time. It didn't scale past this, but it was very exciting.
Maybe there is a market for something like a real-time Reddit, but it's a lot harder to pull off because it needs many people on at the same time before it can hit critical mass.
There's definitely a huge demand for less dating-oriented ways to meet people online.
> something like a real-time Reddit
Reddit did that with Robin, their April Fools' Day experiment two years ago. It randomly connected small groups of participating redditors in chatrooms, and would gradually merge the chatrooms if the majority of the users voted to do so. It was a lot of fun while it lasted, but I don't think anyone has managed to successfully replicate it since.
I guess it's because we stay online 24 hours a day now, whereas in the past connected time was a scarce commodity. I felt bad whenever I found I was online but not connected to my IRC channels, losing all the history :(
I think as others have mentioned, spam is a big problem, but the more fundamental problem with this idea is supply/demand. For a sufficiently large system, the number of people that are interested in talking to person X about topic Y isn't likely to be well matched with the number of people person X is interested in talking about topic Y with.
Dating sites, while not what you want, are very illustrative. On open messaging platforms like OkCupid, in which anyone can message anyone, women are barraged with messages from men that want to talk about sex. How do you filter them out? If you do manage to filter them out, how do you surface a reasonable number of good conversation matches?
The only solution I've seen in this thread is offering opt-out. But all that's doing is offering people a way to not use a product, it's not fixing the problem.
Even relatively well funded and mature dating sites solve this by either offering finicky filter options to users, or really blunt filters (women have to initiate the conversation).
I don't get this. Dating apps are incredibly popular, and it would be opt-in. If it's the age bucket you're concerned with, I met some of my best friends online during that time (female and male). Adolescence is just like that. I understand that harassment, intimidation and general annoying behavior are a concern. But, if we reached the phase where "two random people can't have a normal conversation if they both wish to" is the default stance then we all need to take a good hard look at ourselves, technology is not going to solve that one.
You only got found if you explicitly marked yourself as available for a chat, which was an opt-in choice that could be easily toggled on and off, or left off all the time if you just wanted to use ICQ to chat with your friends as current IMs are used. And you could omit information, e.g. you could fill in your hobbies but not gender if you didn't want to be found by gender. And the privacy settings were much more understandable and transparent than in e.g. Facebook.
I guess there would be creeps, as anywhere where you can be exposed to unknown people, but I don't think the problem was worse than elsewhere. In fact, in my later experiences in chats, dating sites, etc., most women seemed to be (understandably) on the defensive due to the amount of stalkers and creeps they had to endure regularly, and in ICQ it didn't seem so much that way, I got plenty of conversations, girls (and guys) giving me their address for exchanging stamps or letters, etc., which I found much harder later in other media. Although maybe it's that both the Internet and myself were young and naive back then.
Anyway, if stalkers did become a big problem, surely the tech could have improved to deal with it.
There are probably several factors, but I think the main reason it was abandoned is simply network effects. MSN Messenger rose for reasons orthogonal to this, and then no one of the main IMs that succeeded it implemented this. Sadly, the question of what IMs gain more traction depends more on network effects than actual quality (see Whatsapp vs. Telegram, etc.)
I mainly blame Facebook for this, but these days it's viewed as seedy and odd to try to befriend strangers across the internet. Each new 'social service' that appears is not only siloed by its own protocol, but also promotes the continued siloing of users into their own (existing) friendship groups.
Back in the day (yeah, I'm old), you could drop into an IRC channel of strangers and be having a good conversation within minutes[1]. ICQ, as mentioned, had a 'find friend' feature, and many other early social platforms encouraged like-minded strangers to interact.
So we all end up just talking to the same 20 or so people we've always spoken to, reinforcing the echo-chamber effect, and never introducing new ideas and viewpoints into our lives.
---
[1] IRC these days, if you are lucky enough to find an active channel, is now so cliquey that any interlopers are largely ignored.
You make a good point that I haven't really thought of (in quite that way). Anonymity used to be really fun on the internet in the early days for me, meeting strangers, playing games or whatever. Now so much is tied to us directly and to real-life that the thrill of exploration is kind of gone from the internet to me.
Definitely agree with your point about it being odd to get in touch with perfect strangers, but it's a great tool for promoting acquaintances to friends. I recently had a small house party and organized a ski trip, and Facebook allowed me to very easily invite people I didn't know very well. It can be a great low-pressure, non-intrusive, but genuine way to try to bring people into something.
Facebook Graph Search was like this (and creepier, as people didn't always know _how_ they were searchable), but Facebook has heavily crippled it in the last few years.
Graph Search always surfaced relevant information that explained why they matched a search. For example “Friends who visited London” might say “Checked in at Tower Bridge” below a result.
You’re right that people didn’t understand Facebook’s privacy model. People assume that privacy by obscurity is a guarantee. Never mind that anyone could write a bot that did what Graph Search did. Graph Search was actually more conservative than it needed to be to avoid surprises.
In the end that all mattered less than the fact that it wasn’t solving a problem people had.
Interesting. I haven't even tried Facebook Graph search as I think it never even became available in my country. But I think the main difference is probably that in Facebook, it's generally not socially OK to contact strangers. In ICQ, you could set and communicate explicitly if you wanted to be contacted by strangers or not (and if not, you weren't searchable).
I remember when Facebook hyped the Graph search as the next big thing, a truly social search engine. Years later I only occasionally remember about its existence when trying to find someone/something specific on Facebook, and even then it's at most mildly helpful.
How did you "meet" people on ICQ? Was there a discovery functionality? I used it too, but only for chatting with people I knew and whose account IDs were known to me.
One thing people still don't remember from ICQ was the ability to collaborate / live message, i.e. edit text in a shared environment. It was bonkers. You could be writing and someone else could change the font options for it, type at the same time, even delete your messages.
Maybe I'm just a nostalgic old geezer, but that kind of socialising was my favourite thing about the internet. It's a shame I haven't been able to experience it in over a decade.
It was so incredibly easy to find and talk to new people. It seems like everything about the current internet makes that deliberately harder. Even sending e-mail to strangers is hard now, you'll likely end up in their spam folder.
I am certain we could easily create this using websockets and p2p browser data streams... who's coming with me? Crowdfund me and I will release a prototype.
OK, I only ask for 10% of the profits for giving you the idea :)
Just kidding. I would definitely support such a crowdfunding within my modest means. The problem is that I don't think the tech is the main issue (after all, ICQ did it with 90s tech). Obtaining a userbase is the main issue, and network effects work against new contenders.
E-Ink. It is the "correct" choice for display technology, and with enough research money put into it, it could replace these abominable light-emitting displays. But what we already have is "good enough", despite all of the hidden costs, and so we're stuck with it.
100% agree. Imagine my shock when, as a young technology analyst, I discovered that E Ink was a smallcap tech company in Taiwan that basically produced Kindle displays, store price tags, and that cool double-sided phone that one time. I thought it was a huge deal when they managed to do color E-Ink, but no one cared.
You can buy 3-color (white, black, and read all over) ones for ... I want to say about 20 yuan which is like $3?
I haven't looked into driving them yet; that's step 3 and I'm still on step 1 (monochrome OLEDs) but I think you might have to do some funky temperature adjustment stuff to get good results out of them...maybe that hasn't quite been integrated into the display controller chips yet?
I love E-Ink displays. I fell in love when I bought my first Kindle: I could read all day and my eyes wouldn't get tired at all.
Since then I've been waiting for an E-Ink based laptop for work. It doesn't need to play videos or even display color. As a backend developer, I can live with a greyscale display. It doesn't need high refresh rates either.
But now almost 10 years later, it doesn't look like this will ever happen.
I arrive a bit late to the party. I have the same wish and thought about building a startup around it, so I dug around last year.
> But now almost 10 years later, it doesn't look like this will ever happen
Some Chinese companies are already trying to do that: the Dasung Paperlike 13.3" monitor.
There are reviews around the web about it, and you can buy one on Amazon. I have not done it yet, as the price is still high at $1200.
I also met with a great manager from the Taiwanese E Ink company. E Ink's technologies (they have several similar ones) are mechanically based, which implies many constraints.
For the story: the MIT guys who developed it spent around 20 years on it before passing the baby on to the Taiwan-based company.
Except for Amazon, most big companies are not pushing it, I guess because pictures and videos are so prevalent nowadays.
The Kindle and its e-reader kin are pretty successful?
I think the problem is that a device with an e-ink display is necessarily single purpose. General purpose devices can't use e-ink, they need to support video playback and browsers with full colour.
I'd love an e-ink dashboard in my car so it's not so glary, e-ink tablet that does a bit more than a Kindle but isn't a full media player (super thin super light with basic browser for wiki, news, rss, and email), ~20" e-ink screen on the inside of my apartment door with calendar and notifications and todolist, there's so many cool things we could do with the tech if we could produce it cheaper and bigger. Not everything needs to be able to play video.
Who knows how much faster this could get with enough investment? Unless there's some fundamental physical limitation to the update rate that I'm not aware of.
Imagine a table with an e-ink display that can show you whatever you want it to show you.
Imagine reading news above your sink while doing the dishes.
Imagine the entire walls constructed with e-ink displays in your home and being able to change the wallpapers in your living room depending on what's currently in your mind.
All of those could have a "read-only" mode until you push a button, they would use pretty much no electricity, and would ideally be water-proof.
Instead, what I have right now, is an e-book reader. Which is nice and all. I'm often sending articles to my Kindle (kudos https://p2k.co/) and all sorts of things, but I had much higher hopes from e-ink technology (and still kind of do). The hope of having my home filled with e-ink displays is much greater than having a home filled with sensors.
Many years ago, I heard a politician give a speech, and he used a phrase there (I am translating from German, so it might sound a little clumsy): The worst enemy of Good is Better.
Over time, I have come to believe that the worst enemy of Better is "Good Enough".
A comparable English expression is “the perfect is the enemy of the good”, which I think every software engineer should have engraved on to their monitors as a reminder.
I'm out of date. Does E-ink still have ~500ms latency of refreshing the screen? If so, that's the #1 reason I believed it wouldn't be successful when I used them 6 years ago.
E-Ink is a technology that couldn't be developed well enough, it just never got there. I mean, I love my Kindle paperwhite, but without color and rapid (60hz+) refresh, it won't ever be quite good enough. It's a fascinating case of a technology path that fizzles out.
E-Ink never took off because once it reached a profitable niche, it stopped being developed. And it's patent encumbered. I expect E-Ink development will resume around 2030 or so if it first appeared in 2010.
For me it was transflective LCD. Very low power compared to IPS panels, high contrast in direct light. Not quite as low power as e-ink, but it looked like e-ink and had a high refresh rate.
I wish I could get a programmable e-ink display at a reasonable price, that I could mount on the wall or desk. To show quotes, photos, weather, calendar, anything that is nice to have passively updated in the environment, using low power, and not emitting its own light.
This is something I would love to have. The possibilities are endless. Status screens, quotes, project progress, server health, etc. I really wish something like this existed that I could use to show some basic info that would be nice to have a screen at home, but not so nice a monitor is even a remote possibility.
I think it's one of those technologies that is actually hard. They got to where we are with low-to-medium effort and expense, but they're unwilling to go further because they can't, or don't have the money. I suspect color e-ink tanked because of this.
Transflective LCDs are another tech that didn't go anywhere. They let you use LCDs outdoors by reflecting natural light, which solves the problems of color and video. There were even models with solar cells embedded, so not only did you get longer battery life, you could recharge in the sun.
Paypal used to have this feature that allowed you to install a browser add-on and you could generate a CC number on the fly that was good for either one time use or recurring use (for subscription services). This feature served two primary purposes:
1) to be able to pay using PayPal on sites that didn't support it
2) to help protect against fraud, which was becoming a massive problem at the time. If the number was stolen, it immediately became invalid, and a hacker/thief could not use the CC number to purchase/steal anything.
It was that second aspect that I thought would totally eliminate credit card fraud and make people comfortable with online purchases on smaller sites. I have no idea why PayPal killed the program, but even before it did, not many people used it. I was the only person I knew who was even aware it existed.
I don't think it's too tin-foily to assume that there are non-technical reasons why the credit card networks don't want one-time-use credit card numbers, and that PayPal would care more about its relationships with those networks than about a product that didn't immediately take off.
This is how the innovator's dilemma works: a big entrenched company, too scared to make changes that would jeopardize existing partnerships and businesses, upended by a nimbler competitor that doesn't have to care about those things. It'll happen!
The one-time PAN patents (and the merchant- or tx-bound PAN patents) of the late '90s are largely expired at this point, so it is open art now. I see more and more companies starting to implement it more broadly (Citi, BofA, CapitalOne). It's nifty because you don't need the hefty "Verified by Visa" type integration (nor any of that SET stuff, also from the '90s).
The last time I logged into my PayPal pre-paid debit card portal, they still had this functionality (sans Browser plug-in), but I don't recall seeing it on PayPal proper for a while...
The last time I talked to the MC folks (granted, it has been a long while), they actually thought it (OTP) was a nifty client-side (plus closed-loop) technique and was a nice (and orthogonal) add-on to the types of security that they are pushing vis-a-vis tokenization on the merchant side...
Yeah, but in PayPal's case, they were all MasterCard numbers. And keep in mind that since one of the primary uses was to be able to pay secretly with PayPal on a site that didn't support it, the number had to validate through the merchant's existing CC system. MasterCard was clearly on board with the process.
Too bad it requires an invite code. Do you have one for us? :)
I use privacy.com for something similar, but it's a debit card (so it just connects to your bank account) rather than a credit card and doesn't offer any rewards.
The problem is you have to tie a debit account to this, right? Why can't I tie another credit card as a source of funding, or PayPal or some other stream? The fewer organizations that have my debit information, the better.
This tech is available and extremely well-implemented via Final Card. I use it and have numbers stored for probably 50 sites/services/etc. Anything where I put a card in online.
So next time my card gets compromised because I used it in a restaurant (seems to happen regularly) then none of those have to be reset. Just get a new "physical" card and go on my merry way. Many hours saved.
It's not just PayPal. I think Discover and American Express both had that feature but killed it off. Bank of America still has it and (IIRC) Chase does too.
On the Quora page they explained that the main reason behind dropping the product was the fact that users had to install a browser extension to get the CC number, which put people off.
Surely they could have sent the CC details to users by email or SMS (security risks aside), or better still, let users obtain the details by logging into their PayPal account.
Seems odd that the execs axed the product over this when there were so many solutions to the problem.
I suspect it had more to do with pushback from card providers as it likely made tracking users more difficult. Same reason some retailers still don’t support Apple Pay.
Both Bank of America and Citi credit cards do it. They have it available right on their websites, and Citi even has an optional desktop application to quickly generate temporary use cards. I use it all the time for shady subscription services, and also to sign up for "New User" promos multiple times :)
I really liked this about my BOA card. I don't use the card anymore because the rewards suck, but I used to make one-time numbers if I didn't have my wallet handy.
I think Privacy.com (no personal relation) offers a similar service now, though you've got to trust them with access to your money at some point in the process.
Edit: Since I started writing this comment others posted the same one - and pointed out it's not really a credit card.
In Portugal we have a company that provides a service that is very similar to what you described (creation of virtual CCs which can only be used 1 time, or X times by one retailer, for subscriptions). The service is called MBway (previously MBnet) and it works very well.
I experienced this a week ago. Imagine you used your one and only debit card for a subscription service, and its billing system went mad. Guess what: to stop the subscription madness, your only option is to block the whole card. On top of that, you have the stress of getting a new one from your bank, and you're several days without a debit card. Or imagine somebody makes a fraudulent charge: when it's large and your limit isn't big (or the card is connected to your account), the money is lost at first. You practically lose time during which you can't use that account/card at all.
F#. Back in 2006 I first stumbled upon it and was amazed. It had so much potential. Everything C# did, and more, better. I was sure we'd see 20% of MS devs moving to it.
I underestimated the momentum of MS, the power of embarrassment of a high-profile hire being shown up by a researcher, and the incredible anti-FP and even anti-generics resentment(?) that MS kept towards them. Plus the insane comments from actual developers who literally did not understand the basics of C# ("C is a subset of C#" and "var is dynamic typing" <- comments from a public high-profile MS hire).
I've basically given up hope on programming becoming better over time. A lot of apps are boring grunt work anyway, so the edge in using better tools can be beaten just by throwing a lot of sub-par people at it.
On the plus side, for people looking to strike it rich in "tech", knowing tech isn't really a prerequisite. Persistence and the 'hacker' spirit, even if it means you spend all night writing something in PHP that would literally be one line if you know what you were doing, hey, that's what leads to big exits.
I feel it's more a case of functional programming trying to target the wrong segment - albeit out of necessity.
There's two kinds of programs being made. First most common one is the kind you're talking about here: grunt work. It's not about the code, but more about having the code do a straight forward task with very flexible constraints. A web app that talks to a database and does a few simple transformations -- like forum software or a todo app.
The second type is the interesting one: actual difficult code that does something unique and difficult and can take years to write by very experienced developers. Usually the difficult part here is coming up with the correct algorithm and then applying it, often with performance considerations being extremely important as the code is doing a lot of work. An impressive new MMORPG game, a new rendering technique, complex simulations, control code for rockets or advanced batteries.
The big problem with functional programming is that it's never been positioned as a solution to the second type. Generally people trying the second type are told to use C, C++ or, recently, Rust. Functional programming is marketed as making the first type "better", because trying to market it as being more efficient than C/C++ has not worked; people rely on micro-benchmarks for these decisions. But this falls apart with the argument you gave: for the first type, it's better to just hire more junior programmers, as there's nothing really difficult involved. And using a functional language makes it extremely difficult to hire junior programmers, because functional languages are aimed at advanced programmers.
Don't give up. I see FP growing every day. I saw a Haskell job listing yesterday for $180k/yr. These skills are valuable and becoming more valued. I think the Xamarin purchase bodes well for F#, which I intend to learn. My understanding is that it is more popular in Europe, btw.
F# was the only reason I wanted to dip into .NET. I really expected it to take the development world by storm because it was introduced when FP was getting a lot of attention.
BeOS. That was the best OS at the time by far: stellar performance, and a fantastic C++ API usable by a newbie. I still haven't found a GUI as responsive as that one.
DVD-Audio. Great multi-channel, high-rate, high-resolution audio. But it required the whole production chain to upgrade, as well as new consumer equipment, and it competed with Sony's own format (which was also a failure)... But it would have given the record industry a few more years of revenue before bandwidth became sufficient to download or stream music at this resolution.
I'm not surprised DVD-Audio and co failed. Their quality is indistinguishable from CDs despite what audiophiles would have you believe, and by the time they came out MP3 was clearly the future.
> Their quality is indistinguishable from CDs despite what audiophiles would have you believe
Well, I blame the system. You needed upgraded headphones or speakers. I had $2000 studio monitors when I was an audio engineer, and I can 100% tell you that in a blind test I could tell the difference.
95% of the reason is that people don't care about quality audio. It's something about human brains. MP3s sound horrible compared to FLAC on good equipment. Instead, people wear Beats Bluetooth headphones listening to streamed audio.
I also blame myself. I use a $15 in-ear Bluetooth earpiece (it looks like a hearing aid), the Inovate G10. I listen almost exclusively to podcasts or YouTube videos; the convenience matters so much more than sound quality. I listen to music at home (when my kids aren't around, because all they do is complain). I need to get a good pair of headphones.
Even if, like me, you don't believe in very-high-quality audio, the fact is we're listening to stereo music but viewing movies on 5.1 systems. This would have been different with DVD-A. I would have loved to listen to a live concert CD with surround sound.
+1. Even if they could be distinguished, it would be only in top HiFi setups. Most people (me included) listened with shitty car stereos, portable players with terrible DACs and cheap headphones.
Similarly, Windows 10 Mobile didn't take off either. Before that, even Windows Phone became a failure.
I was really excited when I got my Lumia 920 [0]: The OS was just so solid, sleek, and fast. Too bad Microsoft didn't put adequate effort into it. It's as if they couldn't care less about this OS.
[0]: Not to mention the great, solid design of the hardware by Nokia.
BeOS had extreme task scheduling to keep the UI smooth. For example, if you ran too many programs at once, the network connection would stutter noticeably even while the UI still felt slick.
The general-purpose computer. It's a dinosaur on the verge of extinction at the hands of walled-garden app stores on phones and tablets. I never, ever expected that to happen.
Given how cheap even powerful general-purpose computers are, I don't think they'll ever be priced beyond availability. Aficionados, internet stores, ARM, Linux, market forces, etc.
To some extent, good riddance. Software engineers and product designers are the reason the PC is effectively broken by design in UX, because their products are inexcusably slow. There is zero reason anything I want to do with e.g. Office shouldn't have zero wait time, except non-user-centric design. My friggin' circa-1984 Mac Plus felt faster at most stuff than a generic desktop PC.
The decade when Moores law gave freebies to software is probably one of the reasons for this astronomical sluggishness. Application feels slow? Well, just wait a year and get a new faster CPU. No need to spend resources on optimization.
I don't know who started the idea that it's smarter to buy more hardware than to spend resources on programming, but if I had to guess, it was the mainframe vendors. Cool for embarrassingly parallelizable batch-processing jobs for massive bureaucracies, not so much for desktops.
And don't get me started on the idiocy of thinking it's fine that software developers don't need to develop domain understanding in the field they operate in.
With the general purpose computer desktop development, we have a system that's broken on so, so many levels.
At least OS X seems to try to do the right thing in an attempt at fluidity, and I can get a Linux desktop to operate smoothly. Even Windows 10 starts fast, but oh brother and sister, the clunkiness of software.
Despite what I said, I think the situation is improving. Immediacy in handheld devices puts pressure on the desktop to concentrate on not wasting the user's time as well.
No way and NO HOW! That statement is 100% false. The issue is that you remember that it felt fast. My Amiga felt like lightning to me in 1985, and I still boot it up from time to time. The delays in everything are over the top, and the Mac 128k was horribly slow.
> My friggin circa 1984 Mac Plus felt faster on most stuff than the generic desktop PC
You do have some merit to the argument, but your 1984 Mac didn't have half as "pretty" a UI, and that matters to some people. It didn't do dynamic language processing to figure out whether you'd made a grammatical error. It did not have to continuously scan every data stream for malware. It did not have to use a significantly restricted environment to limit damage when some malware did get through. It didn't even support multitasking. My point is that computers today do so much more; responsive UI is entirely possible for desktop applications, but the development cost is more than most companies/people are willing to pay.
> The decade when Moores law gave freebies to software is probably one of the reasons for this astronomical sluggishness. Application feels slow? Well, just wait a year and get a new faster CPU. No need to spend resources on optimization.
I'd like to see Word 1.0 coded with current tech stacks, same functionality, and see how fast it feels.
I think this feeling is a combination of a lot of things taken for granted and nostalgia.
>I don't know who started with the idea that it's smarter to buy more hardware than to spend resources on programming
It was the CFO's idea. Software is expensive, and hardware is _incredibly cheap_.
Moreover, if you're in the business of selling hardware, be it PCs, mobile phones, white goods, whatever, it is in your power to throw cheap hardware at the problem, because software development will always be one of your highest costs. You will never be able to just drop readymade code into your new hardware design. And you can more easily recoup the cost of hardware development because you can mark up the price of anything with a faster CPU or more memory accordingly; you can't so easily charge for software features.
Eventually, you have to get some work done, and you just can't do that with a tablet screen or a touch interface. The mouse and keyboard will reign, and even if the concept of a non-sandboxed system goes away, the future will still have hardware that looks like desktop computers and laptops. The OS filesystem won't go away, because applications on these personal computers need to be able to talk to each other somehow, and the current iOS method of "launch a file in this application from this other application" is too limiting. The only thing that will change is the application packaging, which will become optionally centralized and have permission levels just like Android apps. I think overall, future computers will be just as pleasant to use, just as programmable/configurable, and just as easy to download software onto from decentralized sources.
Just think, Chrome OS has essentially failed to take over the OS market, so I doubt anything using this overly-sandboxed model will be successful in the future.
> Just think, Chrome OS has essentially failed to take over the OS market, so I doubt anything using this overly-sandboxed model will be successful in the future.
But that's the thing, there's no "os market" for appliances. ChromeOS won't 'take over' computing devices any more than toasters will take over your kitchen.
Did people really think the arms race of heuristic malware detection in bolt-on security software would be sustainable?
Sandboxing applications and ending the free-for-all over the user’s entire file system were a long time coming. Then spammy software download websites and “Download Now” banner ads bigger than the official download buttons for popular software made it obvious that we needed a trustworthy distribution center. Of course someone starting an OS in the mid 2000s wasn’t going to adopt the Windows shareware distribution model.
The Linux distribution model of a vetted, quality controlled repository of software from a trusted middleman was so obviously superior, of course it won. I guess it’s surprising that iOS got so far with no escape hatch, but Android is the far more popular platform and has always let more technical users peek over the walls if they so desire.
I’m more surprised that so many people are content to type on glass. I was sure the BlackBerry keyboard was the future.
I'm not a super fan of typing on glass, especially when next-keystroke prediction is as poor as iOS has lately been managing. I am a super fan of having a device with the size of display that eliminating a physical input device achieves.
My old Palm TX did the same thing, showing the graffiti area when a text input had focus and otherwise giving applications the use of all but a status bar's worth of glorious 320x480 screen space. Granted that Palm OS did less to help applications really leverage the extra space than iOS does. But most of what I did on that device was reading anyway.
Of course, Graffiti via stylus suffers less from a soft UI than keyboard input via touch, so it's not quite apples to apples. Still, though, as annoying as a soft keyboard so frequently can be, I wouldn't give up half my phone's display for a hard one.
That's profoundly disappointing, I agree, but in a sense perhaps we geeks only have ourselves to blame. We didn't solve the basic (to any normal user) problems of ease of use and security, despite having decades to do so. We also didn't start routinely educating the next generation of kids in how useful and powerful programming skills are. Consequently, for many people personal computing has been reduced to little more than a mechanism for consuming online content and for relatively simple communications using a few trusted channels.
The one comfort is that someone still needs to be creating all that content and all those communications channels, and those people are always going to benefit from something much more capable than a small, lock-down touchscreen+WiFi device.
>We didn't solve the basic (to any normal user) problems of ease of use and security
This user is a strawman. The problem we didn't solve was decoupling ourselves from corporations controlling our user experience, because we didn't create a good enough alternative for getting stuff done.
> The one comfort is that someone still needs to be creating all that content and all those communications channels, and those people are always going to benefit from something much more capable than a small, lock-down touchscreen+WiFi device.
Maybe. Let's see what the next generation of visual programming and IDE UX bring.
This really destroyed my day. I'm having a big problem staying positive, for at least a day, now that you've eloquently expressed what I've had a creeping feeling of but just couldn't quite word in any discussion.
As computers become "locked app collections" they become appliances, appliances business and government institutions can control and regulate.
"Can you make the iPad just not do that thing?"
As a web developer I am bemused at every other comment on here that seems to agree with your sentiment that the world is walling up. Yes, app stores will be a control mechanism forever, and consumers may not understand what they are opting into. But equally, the open web will never go away, and that's what general-purpose computation has shifted to; that's basically all we've done this century. Am I misunderstanding what you mean by general-purpose computer?
[0] https://drive.googleblog.com/2010/09/whats-different-about-n...
[1] http://blog.atom.io/2017/11/15/code-together-in-real-time-wi...
What I want to say is an obvious thing -- stellar team doesn't imply success.
The collaborative document editing in Google Docs improved afterwards. The Google Wave Wikipedia entry doesn't mention this, but I have some vague recollection of some of their work being migrated into Google Docs to improve it in that regard. That may be completely wrong, though, since I can't seem to verify it.
But email was already an existing protocol. So invites meant nothing. I could still communicate with my family even though they were still on their ISP account. Or my sister with her university account.
Facetious comparison aside, I think the Yegge(?) rant about how Google does things applied here.
There was a time when Napster, Kazaa, eMule were king. The content industry fought against it, but developers came up with decentralized solutions like DHTs or supernodes.
I was convinced the next step would be friend-of-a-friend sharing: you only share files with your direct friends, but they can pass those files on automatically, so their friends can get them too. The friends of your friend don't know that the file originally came from you; it is completely transparent. The goal is to not let anybody you don't trust know what you are down- or uploading.
I would piggyback on the social graph of existing networks to get started. I actually had a prototype of the friend discovery code, based on XMPP (using custom stanzas). You login with your credentials, and it shows you all your friends that are using the same app. It worked with GTalk, Facebook Messenger, and Skype (via Skype API). One nice feature was that this worked without getting an API key from Facebook etc., or having a central server. I was so pissed when they all changed their APIs to make this impossible. It felt like a conspiracy to stop this use case.
I still think if somebody pulled this off, it might work, and it would be pretty disruptive (in both the good and the bad sense of the word). It would be the last word in the debate between "data wants to be free, change society to enable what technology promises" and "data has to be a commodity, restrict technology because in capitalism we can't feed artists otherwise".
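The pass-it-on mechanism described above could be sketched roughly like this (a hypothetical toy model, with made-up names; no networking, and nothing to do with any real client's protocol):

```python
# Toy sketch of friend-of-a-friend relaying: each node talks only to
# direct friends, and files obtained from one friend are re-offered to
# other friends with no record of where they came from.

class Node:
    def __init__(self, name):
        self.name = name
        self.friends = []       # direct, trusted connections only
        self.own_files = set()  # files this node shares itself
        self.cache = set()      # files relayed from friends (origin not stored)

    def befriend(self, other):
        self.friends.append(other)
        other.friends.append(self)

    def offered_files(self):
        # What a friend sees: own and relayed files, indistinguishable.
        return self.own_files | self.cache

    def sync(self):
        # Pull everything each direct friend currently offers.
        for friend in self.friends:
            self.cache |= friend.offered_files() - self.own_files

# alice -- bob -- carol: alice and carol are not friends.
alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
alice.befriend(bob)
bob.befriend(carol)
alice.own_files.add("song.mp3")

bob.sync()    # bob picks up song.mp3 from alice
carol.sync()  # carol gets it from bob, never learning it came from alice
print("song.mp3" in carol.offered_files())  # True
```

The key property is that `cache` deliberately stores no provenance, so a node can only ever attribute a file to the direct friend it fetched it from.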
You'd set the music folder you wanted to share (there was no opting out) and so long as you had the program open your files were available for download by other users. The program operated like an always on satellite torrent program with very low impact. You'd find a file to download and the file would come in many chunks from any available user who had the file on their client side. Downloads were fast and in the event you lost connection or closed Audiogalaxy the download would resume immediately on reload.
From my POV Audiogalaxy was extremely threatening to Copyright holders in a way that prior p2p programs weren't. Your comment about software being especially 'disruptive' and ruffling all of the right feathers is what reminded me of Audiogalaxy.
There was no indication of who you were receiving files from. There were no usernames or features outside of download and upload. In terms of creating a piece of software that picked a mission and executed it I'll always look at Audiogalaxy and think it achieved precisely what it set out to do, entirely.
At least in 2012-14 Soulseek was very popular with extremely niche and obscure music fans (stuff not even on What.cd!). Not sure how it is now.
This meant that you could search out a whole bunch of files you wanted, and queue them up for downloading, and whenever the sharers came back online the downloads would start and go through. These days, BitTorrent operates the same way.
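The queue-and-resume behavior described above can be sketched as follows (hypothetical names and a stand-in for the actual transfer; this is not any real client's protocol):

```python
# Toy sketch of offline download queuing: requests wait until the peer
# sharing the file comes online, then the transfer goes through.

from collections import deque

class DownloadQueue:
    def __init__(self):
        self.pending = deque()   # (filename, peer) pairs waiting on a peer
        self.completed = []

    def enqueue(self, filename, peer):
        self.pending.append((filename, peer))

    def on_peers_online(self, online_peers):
        # Called whenever peer presence changes: start whatever is possible.
        still_waiting = deque()
        while self.pending:
            filename, peer = self.pending.popleft()
            if peer in online_peers:
                self.completed.append(filename)  # stand-in for the transfer
            else:
                still_waiting.append((filename, peer))
        self.pending = still_waiting

q = DownloadQueue()
q.enqueue("rare-live-set.mp3", "sharer42")
q.on_peers_online({"someone-else"})   # sharer offline: stays queued
q.on_peers_online({"sharer42"})       # sharer back: download goes through
print(q.completed)  # ['rare-live-set.mp3']
```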
Back in the day of mytunes on a school network, I had HDs full of mp3s. I actively browsed friends (and complete strangers) libraries to find new things, complete an artists discography... etc.
Now? Spotify Premium is probably the best value/$ of any purchase I make in a month.
Even if I wanted to go back to my old ways... it's just not there anymore. I believe iTunes patched out the features that made myTunes possible, and people don't have computers full of MP3s discoverable on a network anymore.
The history is a bit different. Napster was sued out of existence. As I recall, this was done by the record labels, but the argument was that file sharing was unfair to artists. Which it was. But that was obviously not the real point. Streaming services are notoriously stingy when it comes to paying artists for their work. So access to music is still easy, and the net change is that new middlemen are profiting wildly.
(Streaming vs. file sharing is a different issue. I want to own the music I listen to. I don't want to be dependent on an internet connection, a server, a revokable license, etc. )
I also LOVE YouTube Red, and it helps the content creators many times more than my little ad revenue would. How is it that people flagged YouTube Red as anti-content-creator and went after Google with pitchforks, and now content creators keep saying that Red members give them so much more per person? One creator says 1/4 of his income is from YouTube Red (he makes a living on YouTube).
If we are very pessimistic about the royalties of a CD sale - let's call it 20%, and say a single CD costs $10, that's ($10 x 20%) / 0.0084 = 238 streams to cover the cost of a single CD.
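Reproducing that back-of-envelope arithmetic (using the figures in the comment above):

```python
# How many streams match the artist's cut of one CD sale?
cd_price = 10.00
royalty_share = 0.20          # pessimistic artist share of a CD sale
per_stream_payout = 0.0084    # per-stream rate cited above

streams_needed = (cd_price * royalty_share) / per_stream_payout
print(round(streams_needed))  # 238
```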
If you take audiophile-level paranoia, like buying vinyl, SA-CDs, or FLAC masters, I think there's an argument to be made that an artist could make more money from people buying high-quality editions of their work to pirate than from people streaming their music.
It's food for thought.
Disclaimer: I am a music student.
[1]: http://www.informationisbeautiful.net/visualizations/how-muc... [2]: https://www.theverge.com/2015/12/7/9861372/spotify-year-in-r...
Direct Connect really excelled at college campuses. All the traffic was internal so it was blazingly fast plus the hub had a chat feature which is incredibly useful (pre-reddit) and sometimes people would share old tests and stuff.
I still remember the days when P2P connections over the internet "just worked, always" (the early Napster/Kazaa days), and then it went to "it works sometimes", and finally to "almost never works (except if the service offers some kind of middle-man datacenter to bounce the connections between the two NAT'd peers)".
Try to do an IRC DCC file transfer with anyone these days and see what I mean... long ago, it "just worked".
I blame today's unfortunate centralization of internet services largely on NAT breaking P2P connectivity. Who knows, maybe spread of IPv6 will fix this mess... some day...
Anyone remember what that protocol is called?
A warning letter from the ISP is not quite the same threat as a few years in a fed pen.
Using sharehosters with for-pay premium accounts is popular, as is using VPNs with bittorrent. The alternative is low-quality streaming.
I think the content industry isn't pushing for eradicating illegal downloads, they just want it to be at least as expensive, or more inconvenient than the legal options. (Which are quite good by now, if the shows you want to watch are available from your provider.)
This very thing exists and is readily available for use in the form of Freenet: https://freenetproject.org/
What I envision is more like a modern version of Kazaa. The experience would be like iTunes or Spotify. Select your media folder, give it access to your contact list, and voila, "Pirate iTunes". The closest current analogue would be RetroShare.
Let me explain myself. In the late 90s, there was an IM client called ICQ. For starters, its UI was leaps and bounds ahead of anything to be seen in the next decade, and it had functionality like bots or groups, that has again become widespread a few years ago.
But what I'm talking about is the fact that I could look for "female, age between 16 and 20, who likes writing, RPGs and electronic music and is available for chat" and results would come up. You could enter your data, interests, hobbies, etc. in the program and mark yourself as available (only if you wanted) and it was a really nice way to meet people. I collected stamps at the time so I remember I would search for people who were stamp collectors in countries for which I had no stamp, leading to stamp exchanges apart from some interesting conversation (as you could also throw some of your interests in in the search window).
I thought that was the future and the tech could only get better from there, but then came MSN Messenger, with really bare-bones features in general and in particular no "searchable" functionality, and displaced ICQ. And since then, nothing similar has appeared. Instant messengers are focused on talking to people you already know, and if you want to meet new people, you have to use specific-purpose communities or dating sites. But good luck finding someone who likes writing, RPGs and electronic music at the same time... That function of searching people by interests in a huge directory of people (not restricted, e.g., to a dating site) is what I thought would take the world by storm, and as far as I know it doesn't even exist anymore at enough scale to be meaningful, at least in the West (maybe in WeChat, QQ or one of those apps used in Asia they have something similar, I don't know).
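That kind of directory search is simple to sketch (hypothetical profile data and field names, not ICQ's actual data model): filter a directory of opted-in profiles by attributes and required interests.

```python
# Toy sketch of an ICQ-style people search: only profiles marked as
# searchable are matched, against basic attributes plus shared interests.

profiles = [
    {"name": "A", "sex": "f", "age": 18, "available": True,
     "interests": {"writing", "rpgs", "electronic music"}},
    {"name": "B", "sex": "f", "age": 25, "available": True,
     "interests": {"stamps", "writing"}},
    {"name": "C", "sex": "f", "age": 19, "available": False,
     "interests": {"writing", "rpgs", "electronic music"}},
]

def search(directory, sex=None, min_age=None, max_age=None, interests=()):
    results = []
    for p in directory:
        if not p["available"]:
            continue  # only people who marked themselves searchable
        if sex and p["sex"] != sex:
            continue
        if min_age and p["age"] < min_age:
            continue
        if max_age and p["age"] > max_age:
            continue
        if not set(interests) <= p["interests"]:
            continue  # must have every requested interest
        results.append(p["name"])
    return results

print(search(profiles, sex="f", min_age=16, max_age=20,
             interests={"writing", "rpgs", "electronic music"}))  # ['A']
```

B is excluded by age, and C by having opted out of search, which is exactly the kind of consent-based filtering the original feature offered.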
After a few years, setting your status to "Skype me" meant every few minutes a chat from a new person advertising porn sites or other scams. So I stopped using it. Presumably so did everyone else; at some point they disabled the feature. And now Skype is just a tool to talk to people you already know.
I remember an era when chat rooms like IRC had thousands of people online at a time. It didn't scale past this, but it was very exciting.
Maybe there is a market for something like a real-time Reddit, but it's a lot harder to pull off because many people need to be online at the same time before it can hit critical mass.
> something like a real-time Reddit
Reddit did that with Robin, their April Fools' Day experiment two years ago. It randomly connected small groups of the participating redditors in chatrooms, and would gradually merge the chatrooms if the majority of the users voted to do so. It was a lot of fun while it lasted, but I don't think anyone has managed to successfully replicate it since.
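The merge mechanic described above is easy to sketch. This is a rough simulation of one Robin-style round, not Reddit's actual code (the vote probability and pairing scheme are assumptions for illustration): each group votes, and groups whose majority voted to grow are paired off and merged.

```python
import random

def robin_round(groups, vote_prob=0.7, rng=random):
    """One round of a Robin-style merge: each member votes independently;
    groups with a majority in favor are paired off and merged."""
    mergers, holdouts = [], []
    for g in groups:
        votes = sum(rng.random() < vote_prob for _ in g)
        (mergers if votes > len(g) / 2 else holdouts).append(g)
    # pair up consenting groups and union their membership
    merged = [a | b for a, b in zip(mergers[::2], mergers[1::2])]
    if len(mergers) % 2:          # odd group out waits for the next round
        holdouts.append(mergers[-1])
    return merged + holdouts

# start everyone in singleton rooms and let the rooms snowball
rng = random.Random(0)
groups = [{i} for i in range(8)]
for _ in range(5):
    groups = robin_round(groups, rng=rng)
```

No one is ever lost or duplicated: every round just repartitions the same set of users into fewer, larger rooms.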
https://www.chatroulette.com/
Dating sites, while not what you want, are very illustrative. On open messaging platforms like OkCupid, in which anyone can message anyone, women are barraged with messages from men that want to talk about sex. How do you filter them out? If you do manage to filter them out, how do you surface a reasonable number of good conversation matches?
The only solution I've seen in this thread is offering opt-out. But all that's doing is offering people a way to not use a product, it's not fixing the problem.
Even relatively well funded and mature dating sites solve this by either offering finicky filter options to users, or really blunt filters (women have to initiate the conversation).
This is probably the reason it was abandoned.
I guess there would be creeps, as anywhere where you can be exposed to unknown people, but I don't think the problem was worse than elsewhere. In fact, in my later experiences in chats, dating sites, etc., most women seemed to be (understandably) on the defensive due to the amount of stalkers and creeps they had to endure regularly, and in ICQ it didn't seem so much that way, I got plenty of conversations, girls (and guys) giving me their address for exchanging stamps or letters, etc., which I found much harder later in other media. Although maybe it's that both the Internet and myself were young and naive back then.
Anyway, if stalkers did become a big problem, surely the tech could have improved to deal with it.
There are probably several factors, but I think the main reason it was abandoned is simply network effects. MSN Messenger rose for reasons orthogonal to this, and then none of the main IMs that succeeded it implemented this. Sadly, which IMs gain more traction depends more on network effects than actual quality (see WhatsApp vs. Telegram, etc.)
Back in the day (yeah, I'm old), you could drop into an IRC channel of strangers and be having a good conversation within minutes[1]. ICQ, as mentioned, had a 'find friend' feature, and many other early social platforms encouraged like-minded strangers to interact.
So we all end up just talking to the same 20 or so people we've always spoken to, reinforcing the echo-chamber effect, and never introducing new ideas and viewpoints into our lives.
---
[1] IRC these days, if you are lucky enough to find an active channel, is now so cliquey that any interlopers are largely ignored.
Try http://graph.tips/
You’re right that people didn’t understand Facebook’s privacy model. People assume that privacy by obscurity is a guarantee. Never mind that anyone could write a bot that did what Graph Search did. Graph Search was actually more conservative than it needed to be to avoid surprises.
In the end that all mattered less than the fact that it wasn’t solving a problem people had.
I loved it. And this was back in 2000 maybe.
I'm sorry to hear that.
It was so incredibly easy to find and talk to new people. It seems like everything about the current internet makes that deliberately harder. Even sending e-mail to strangers is hard now, you'll likely end up in their spam folder.
Just kidding. I would definitely support such a crowdfunding within my modest means. The problem is that I don't think the tech is the main issue (after all, ICQ did it with 90s tech). Obtaining a userbase is the main issue, and network effects go against new contenders.
Too many people named Dan.
"females who are single, age between 16 and 20, who likes writing" would have returned precise results.
I met a guy to play Dota 2 with in 2012 and we've been friends since.
I haven't looked into driving them yet; that's step 3 and I'm still on step 1 (monochrome OLEDs) but I think you might have to do some funky temperature adjustment stuff to get good results out of them...maybe that hasn't quite been integrated into the display controller chips yet?
Since then I have been waiting for an E-Ink based laptop for work. It doesn't need to play videos or even display color. As a backend developer, I can live with a greyscale display. It doesn't need high refresh rates either.
But now almost 10 years later, it doesn't look like this will ever happen.
> But now almost 10 years later, it doesn't look like this will ever happen
A Chinese company is already trying to do that: the Dasung Paperlike 13.3" E-Ink monitor.
There are reviews around the web and you can buy one on Amazon. I have not done so yet, as the price is still high at $1200.
I also met a great manager from the Taiwanese E Ink company. E Ink technologies (they have several similar ones) are mechanically based, which implies many constraints.
For the record, the MIT folks who developed it spent around 20 years on it before passing the baton to the Taiwan-based company.
Except for Amazon, most big companies are not pushing it, I guess because pictures and videos are so prevalent nowadays.
I think the problem is that a device with an e-ink display is necessarily single purpose. General purpose devices can't use e-ink, they need to support video playback and browsers with full colour.
There are already E-Ink displays with a much faster update rate: https://www.youtube.com/watch?v=wsY3T1uzjAI
Who knows how much faster this could get with enough investment? Unless there's some fundamental physical limitation to the update rate that I'm not aware of.
Imagine reading news above your sink while doing the dishes.
Imagine entire walls constructed of e-ink displays in your home, and being able to change the wallpaper in your living room depending on what's currently on your mind.
All of those could have a "read-only" mode until you push a button, they would use pretty much no electricity, and would ideally be water-proof.
Instead, what I have right now, is an e-book reader. Which is nice and all. I'm often sending articles to my Kindle (kudos https://p2k.co/) and all sorts of things, but I had much higher hopes from e-ink technology (and still kind of do). The hope of having my home filled with e-ink displays is much greater than having a home filled with sensors.
Over time, I have come to believe that the worst enemy of Better is "Good Enough".
What's keeping them from being more popular is the price, anything over a 1" e-ink display costs a literal arm and a leg.
And I don't know why everyone seems to think they aren't successful. Amazon sells a ton of Kindles. They're very successful.
It was that second aspect that I thought would totally eliminate all credit card fraud and make people comfortable with online purchases on smaller sites. I have no idea why PayPal killed the program, but even before it did, not many people used it. I was the only person I knew that was even aware it existed.
EDIT - if anyone is curious, I looked it up. Two ex-PayPal employees explain here: https://www.quora.com/Why-did-PayPal-discontinue-their-one-t...
This is how the innovator's dilemma works. Big entrenched company, too scared to make changes that will jeopardize existing partnerships and businesses, upended by a nimbler competitor that doesn't have to care about those things. It'll happen!
The last time I logged into my PayPal pre-paid debit card portal, they still had this functionality (sans Browser plug-in), but I don't recall seeing it on PayPal proper for a while...
The last time I talked to the MC folks (granted, it has been a long while), they actually thought it (OTP) was a nifty client-side (plus closed-loop) technique and was a nice (and orthogonal) add-on to the types of security that they are pushing vis-a-vis tokenization on the merchant side...
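The exact scheme behind PayPal's one-time card numbers was never public, but the OTP idea the MC folks are describing has a well-known standardized form: counter-based HOTP (RFC 4226), where each code is derived from a shared secret and an incrementing counter, so any given code is useless once consumed. A minimal sketch in Python (using RFC 4226's published test secret):

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter,
    dynamically truncated to a short decimal code."""
    msg = struct.pack(">Q", counter)                       # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # 20-byte MAC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC 4226 Appendix D test secret
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

A one-time card number service could work analogously: the issuer and the client share the secret, the client derives the next number locally, and the issuer marks each counter value as spent on first use, which is exactly the closed-loop property that makes a stolen number worthless.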
We took a hard, deep look at a massive, stagnant industry (credit cards) and used experience and features as differentiators.
I use privacy.com for something similar, but it's a debit card (so it just connects to your bank account) rather than a credit card and doesn't offer any rewards.
So next time my card gets compromised because I used it in a restaurant (seems to happen regularly) then none of those have to be reset. Just get a new "physical" card and go on my merry way. Many hours saved.
https://getfinal.com/
Surely they could have very easily sent the CC details to the users by email or SMS (security risks aside), or, better still, let them obtain the details by logging into their PayPal account.
Seems odd that the execs axed the product over this when there were so many solutions to the problem.
Edit: Since I started writing this comment others posted the same one - and pointed out it's not really a credit card.
I underestimated the momentum of MS, the power of embarrassment of hiring a high-profile figure to be shown up by a researcher, the incredible anti-FP and even anti-generics...resentment(?) that MS kept towards them. Plus the insane comments from actual developers that literally did not understand the basics of C# ("C is a subset of C#" and "var is dynamic typing" <- comments from a public high-profile MS hire).
I've basically given up hope on programming becoming better over time. A lot of apps are boring grunt work anyway, so the edge in using better tools can be beaten just by throwing a lot of sub-par people at it.
On the plus side, for people looking to strike it rich in "tech", knowing tech isn't really a prerequisite. Persistence and the 'hacker' spirit, even if it means you spend all night writing something in PHP that would literally be one line if you knew what you were doing, hey, that's what leads to big exits.
There are two kinds of programs being made. The first, most common one is the kind you're talking about here: grunt work. It's not about the code, but more about having the code do a straightforward task with very flexible constraints. A web app that talks to a database and does a few simple transformations -- like forum software or a todo app.
The second type is the interesting one: actual difficult code that does something unique and difficult and can take years to write by very experienced developers. Usually the difficult part here is coming up with the correct algorithm and then applying it, often with performance considerations being extremely important as the code is doing a lot of work. An impressive new MMORPG game, a new rendering technique, complex simulations, control code for rockets or advanced batteries.
The big problem with functional programming is that it's never been positioned as a solution to the second type. Generally people trying the second type are told to use C, C++ or, recently, Rust. Functional programming is marketed as making the first type "better", because trying to market it as being more efficient than C/C++ has not worked, as people rely on micro benchmarks for these decisions. But this falls apart with the argument you gave: for the first type, it's better to just hire more junior programmers, as there's nothing really difficult involved. And using a functional language makes it extremely difficult to hire junior programmers, because functional languages are aimed at advanced programmers.
It's a massive product-market fit problem.
The examples you gave of difficult problems are mostly soft realtime, which not everything needs to be.
Granted this is a relatively small subset of problems.
(If the whole .NET Core thing gets sorted out.)
DVD-Audio. Great multi-channel, high-rate, high-resolution audio. But it required the whole production chain to upgrade, as well as new consumer equipment, and it competed with Sony's own format (which was also a failure)... But it would have given the record industry a few more years of revenue before bandwidth became sufficient to download or stream music at this resolution.
Well, I blame the system. You needed upgraded headphones or speakers. I had $2000 studio monitors when I was an audio engineer, and I can 100% tell you that in a blind test I could tell the difference.
95% of the reason why is people don't care about quality audio. It's something about human brains. MP3s sound horrible compared to FLAC on good equipment. Instead people wear Beats Bluetooth headphones listening to streamed audio.
I also blame myself. I use a $15 in-ear Bluetooth earpiece (looks like a hearing aid), an Inovate G10. I listen almost exclusively to podcasts or YouTube videos. The convenience matters so much more than sound quality. I listen to music at home (when my kids aren't around, because all they do is complain). I need to get a good pair of headphones.
I was really excited when I got my Lumia 920 [0]: The OS was just so solid, sleek, and fast. Too bad Microsoft didn't put adequate effort into it. It's as if they couldn't care less about this OS.
[0]: Not to mention the great, solid design of the hardware by Nokia.
It nearly happened too - they got down to talking price[1]
[1] https://en.wikipedia.org/wiki/BeOS#History
Given how cheap even powerful general-purpose computers are, I don't think they will ever be prized beyond availability. Aficionados, internet stores, ARM, Linux, market forces, etc.
To some extent, good riddance. Software engineers and product designers are the reason the PC is effectively broken by design, UX-wise. Because they are inexcusably slow. There is zero reason anything I want to do with e.g. Office should have any wait time at all, except non-user-centric design. My friggin circa 1984 Mac Plus felt faster on most stuff than the generic desktop PC.
The decade when Moore's law gave freebies to software is probably one of the reasons for this astronomical sluggishness. Application feels slow? Well, just wait a year and get a new, faster CPU. No need to spend resources on optimization.
I don't know who started the idea that it's smarter to buy more hardware than to spend resources on programming, but if I had to guess, it was the mainframe vendors. Cool for embarrassingly parallelizable batch-processing jobs for massive bureaucracies, not so much for desktops.
And don't get me started on the idiocy of thinking it's fine that software developers don't develop domain understanding in the field they operate in.
With the general purpose computer desktop development, we have a system that's broken on so, so many levels.
At least OS X seems to try to do the right thing with an attempt at fluency, and I can get a Linux desktop to operate smoothly. Even Windows 10 starts fast, but oh brother and sister, the clunkiness of software.
Despite what I said, I think the situation is improving. Immediacy in handheld devices puts pressure on the desktop to concentrate on not wasting the user's time as well.
Let's see how it goes.
See how fast it felt.
https://youtu.be/XwbrCYJcrKQ?t=4m34s
No way and NO HOW! That statement is 100% false. The issue is you remember that it felt fast. My Amiga felt like lightning to me in 1985 and I still boot it up from time to time. The delays in everything were over the top, and the Mac 128k was horribly slow.
Your argument does have some merit, but your 1984 Mac didn't have half as "pretty" a UI, and that matters to some people. It didn't do dynamic language processing to figure out if you'd made a grammatical error. It did not have to continuously scan every data stream for malware. It did not have to use a significantly restricted environment to limit damage when some malware did get through. It didn't even support multitasking. My point is that computers today do so much more; responsive UI is entirely possible for desktop applications, but the development cost is more than most companies/people are willing to pay.
I'd like to see a Word 1.0 coded with current tech stacks. Same functionality. And see how fast it feels.
I think this feeling is a combination of a lot of things taken for granted and nostalgia.
It was the CFO's idea. Software is expensive, and hardware is _incredibly cheap_.
Moreover, if you're in the business of selling hardware, be it PCs, mobile phones, white goods, whatever, it is in your power to throw cheap hardware at the problem, because software development will always be one of your highest costs. You will never be able to just drop readymade code into your new hardware design. And you can more easily recoup the cost of hardware development because you can mark up the price of anything with a faster CPU or more memory accordingly; you can't so easily charge for software features.
Just think, Chrome OS has essentially failed to take over the OS market, so I doubt anything using this overly-sandboxed model will be successful in the future.
But that's the thing, there's no "os market" for appliances. ChromeOS won't 'take over' computing devices any more than toasters will take over your kitchen.
Sandboxing applications and ending the free-for-all over the user’s entire file system were a long time coming. Then spammy software download websites and “Download Now” banner ads bigger than the official download buttons for popular software made it obvious that we needed a trustworthy distribution center. Of course someone starting an OS in the mid 2000s wasn’t going to adopt the Windows shareware distribution model.
The Linux distribution model of a vetted, quality controlled repository of software from a trusted middleman was so obviously superior, of course it won. I guess it’s surprising that iOS got so far with no escape hatch, but Android is the far more popular platform and has always let more technical users peek over the walls if they so desire.
I’m more surprised that so many people are content to type on glass. I was sure that the BlackBerry keyboard was the future.
I agree that mainstream desktop OSes have failed to give us this yet. Fuchsia sounds like an opportunity to do better.
My old Palm TX did the same thing, showing the graffiti area when a text input had focus and otherwise giving applications the use of all but a status bar's worth of glorious 320x480 screen space. Granted that Palm OS did less to help applications really leverage the extra space than iOS does. But most of what I did on that device was reading anyway.
Of course, Graffiti via stylus suffers less from a soft UI than keyboard input via touch, so it's not quite apples to apples. Still, though, as annoying as a soft keyboard so frequently can be, I wouldn't give up half my phone's display for a hard one.
The one comfort is that someone still needs to be creating all that content and all those communications channels, and those people are always going to benefit from something much more capable than a small, lock-down touchscreen+WiFi device.
This user is a strawman. The problem we didn't solve was decoupling ourselves from corporations controlling our user experience, because we didn't create a good enough alternative for getting stuff done.
Maybe. Let's see what the next generation of visual programming and IDE UX bring.
As computers become "locked app collections" they become appliances, appliances business and government institutions can control and regulate. "Can you make the iPad just not do that thing?"
The web is a big success story, but it's not all there is in computing.
[0] https://postmarketos.org/
[1] https://wiki.debian.org/Mobile