jwngr · 8 years ago
Creator here. Six Degrees of Wikipedia is a side project I've been sporadically hacking on over the past few years. It was an interesting technical challenge and it's fun to play with the end result. Here's the tech stack:

  * Frontend: React (Create React App)
  * Backend: Python Flask
  * Database: SQLite
  * Web (frontend) hosting: Firebase Hosting
  * Server (backend) hosting: Google Compute Engine (it runs fine on a tiny f1-micro instance)
All the code is open source[1] and I'm happy to answer any questions about building or maintaining it!

[1] https://github.com/jwngr/sdow

thrownaway954 · 8 years ago
Make it so that if I visit the page and just click the "go" button, it uses the placeholder examples as the start and end points. I did this and got an error message stating "You'll probably want to choose the start and end pages before you hit that," which was annoying. The placeholders that were auto-chosen were actually really interesting.
ng-user · 8 years ago
'Please' and 'thank you' go a long way when requesting additional features for an OSS project.
jwngr · 8 years ago
This has been fixed[1] and should behave in a more intuitive way now. Thanks for the suggestion!

[1] https://github.com/jwngr/sdow/commit/6e42e06488a592784e5d3d2...

anonytrary · 8 years ago
I second this, I did the exact same thing and it made me a bit mad -- why include interesting placeholders if you can't just "go" with them? So, in its current state, it might be a bit counterintuitive and could have a better UX.
justinlilly · 8 years ago
I've poked at trying to build this multiple times for the last 10 years, but always ended up looking at the wikipedia XML and just balking.

The one difference that mine theoretically would have had is that I think it would be rad to include, along each edge in the graph, the paragraph where you found the link, so you could see a little story.

Also, excluding years and places to make the routes more "interesting" (e.g. longest path under a limit without cycles).

examancer · 8 years ago
The UX design is very well executed. This feels so polished. The floating graphs in the background, the individual paths under the chart, etc. It's all very well done with lots of little flourishes. Makes me want to up my game. Thanks for sharing.
Trufa · 8 years ago
Am I not understanding something? When I linked Obama to Uruguay, Myanmar came up (the only surprising one).

https://www.sixdegreesofwikipedia.com/?source=Uruguay&target...

I went to the Uruguayan wiki page and found nothing on Myanmar and nothing relating to Uruguay on the Myanmar page.

Awesome project btw, great idea.

jwngr · 8 years ago
[copy of answer from below] The Wikipedia database doesn't differentiate links which appear in the main article versus in the sources or categories sections. It's possible one of the intermediate links is in there. You sometimes need to do a CTRL+f in "View Source" to find the link. Also, the latest Wikipedia dump is from February 2nd, so it's possible the link has been deleted since that date. I'll regenerate my database when the new dump lands in early March.
spiznnx · 8 years ago
It's a directed graph. Your source is Uruguay and your destination is Obama. You'll find Obama on the midpoints.

Opposite direction: no Myanmar https://www.sixdegreesofwikipedia.com/?source=Uruguay&target...

colemannugent · 8 years ago
Your notification for having JS disabled made me chuckle.

One suggestion I have is that the graph view seems to clutter up pretty fast. Maybe have a slider that increases the length of the lines between the vertices. Also, an SVG export would be cool for visualizing related concepts.

jwngr · 8 years ago
Glad you found one of the Easter eggs :D

The graph visualization / performance is definitely not ideal. I spent a ton of time trying to make d3 more performant and lay out the graph more nicely, but ultimately I just had to cut my losses and go with what I had. I do think there is room for improvement and I'll look into your suggestion, which is something I hadn't considered. SVG export is also a great idea!

ysaimanojkumar · 8 years ago
Hi jwngr,

Thanks for the cool hack. It's nice.

How about representing the destination page as a circle around the whole graph, instead of a node, so that all the paths can be drawn in different directions but still reach the same page? I somehow feel that it might look more beautiful.

jwngr · 8 years ago
Ooh cool idea! That certainly would improve the information density issue. I honestly never considered that at all and have no idea how I'd do it in d3, but I may try to hack it out. Thanks!
sinaa · 8 years ago
Great work!

Do you simply do a BFS to find the shortest paths? If so, are you doing any tricks to avoid the path explosion problem?

jwngr · 8 years ago
Thanks! I'm glad you asked. I actually do what I call a bi-directional breadth first search[1]. The gist of it is that instead of just doing a BFS from the source node until I reach the target node, I do a reverse BFS from the target node as well and wait until the two searches overlap. That helps with the exploding path problem, although that still becomes an issue for longer paths (>= 5 degrees generally). I also pre-compute all the incoming and outgoing links for each page when I create the database[2] so I don't need to do that upon every search, which resulted in a huge performance boost.

[1] https://github.com/jwngr/sdow/blob/a2699dc95d884ec64a4641630... [2] https://github.com/jwngr/sdow/blob/a2699dc95d884ec64a4641630...
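The approach described above can be sketched in plain Python. This is a toy version over in-memory adjacency maps, not the project's actual breadth_first_search.py (which reads its frontiers from SQLite in batches):

```python
from collections import deque

def bidirectional_bfs(source, target, outgoing, incoming):
    """Return a shortest path from source to target, or None.

    `outgoing` and `incoming` are hypothetical adjacency maps
    (page -> set of pages); the real project reads these from SQLite.
    """
    if source == target:
        return [source]
    # Parent maps for path reconstruction, one per search direction.
    fwd_parent = {source: None}
    bwd_parent = {target: None}
    fwd_queue, bwd_queue = deque([source]), deque([target])

    def expand(queue, parent, other_parent, adjacency):
        # Expand one full level of one search frontier.
        for _ in range(len(queue)):
            node = queue.popleft()
            for nbr in adjacency.get(node, ()):
                if nbr not in parent:
                    parent[nbr] = node
                    if nbr in other_parent:  # the two frontiers met
                        return nbr
                    queue.append(nbr)
        return None

    while fwd_queue and bwd_queue:
        # Expand the smaller frontier first, forward on ties.
        if len(fwd_queue) <= len(bwd_queue):
            meet = expand(fwd_queue, fwd_parent, bwd_parent, outgoing)
        else:
            meet = expand(bwd_queue, bwd_parent, fwd_parent, incoming)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None
```

The win is that two searches of depth d/2 each touch far fewer nodes than one forward search of depth d, which is exactly why the exploding-path problem only bites on longer paths.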

ryan_j_naughton · 8 years ago
It is bidirectional BFS: https://github.com/jwngr/sdow/blob/master/sdow/breadth_first...

A* can't be used, given that there's no heuristic for the expected remaining distance to the target.

Any ideas on how such an algorithm could be used without precomputing the entire graph?

Teichopsia · 8 years ago
Apparently an idiot here. What is the difference between web hosting and server hosting?

The way my mind interprets that stack is the database is hosted on firebase while the page is hosted on the server?

Edit: Thank you all for the explanation. I used to think firebase was used as a database. I didn't know one could host front end files there. It seems I still have a long ways to go :)

jwngr · 8 years ago
No, not an idiot. I didn't use the best terms. I updated them to say "Web (frontend) hosting" which are my static files (the HTML, JS, CSS) which is deployed to Firebase Hosting and "Server (backend) hosting" which is my backend Python Flask web server which is deployed to Google Compute Engine (GCE).

So, the website files are hosted on Firebase while the backend is hosted on GCE. The database is actually not hosted; it's just a SQLite file stored on my GCE instance.

servercobra · 8 years ago
Firebase Hosting is a simple way to host frontend code, similar to (but a little easier than) using S3 to serve the frontend. Since the database is SQLite, it seems like the backend and DB are hosted on GCE.
ehsankia · 8 years ago
I assumed it was the other way around: Firebase gives you the webpage, but when you send a query, it is sent to Compute Engine to calculate all the paths and sent back to the frontend to render.
unreal37 · 8 years ago
Some years ago, I owned sixipedia.com. I never knew what to do with it.... If I still had it, I would have given it to you.
SkylerASmith · 8 years ago
This is an awesome tool!

I'm interested in building a fact checker from a Wikipedia graph, and your SDOW seems like a great place to start (I'm intending to use an algorithm inspired by researchers at Indiana University http://journals.plos.org/plosone/article?id=10.1371/journal....). I was wondering if your database has a non-GUI API. Is there a URL or something I can hit to get back JSON or XML as a response?

jwngr · 8 years ago
I'd prefer you not send any additional load to my server (this is just a side project I'm paying out of pocket for), but you are welcome to download the data yourself. There are instructions in the project README[1] to download the SQLite files I use in the project and I should have documented enough about the schema for you to know what queries to make. I am happy to answer questions via GitHub issues if you have them.

[1] https://github.com/jwngr/sdow#get-the-data-yourself

reificator · 8 years ago
Could you please add a mode that does the opposite?

I've always enjoyed playing 6 degrees myself, so if it gives a link to the first page and names the second page, then only shows the available routes when I'm done, that would be a lot of fun.

I have a couple of "hub" articles that I like to use, but I'd like to see how much more effective I could have been with a tool like this. And if it randomizes my start and end like the placeholder text shows, that makes it even easier.

cvigoe · 8 years ago
Love the idea, and it’s brilliantly executed! Well done.

Perhaps I misinterpreted the concept of “degrees of separation”, but I was expecting the site to tell me how to start at page X and get to page Y with the min number of clicks. If you wanted to achieve this, it doesn’t strike me as appropriate to use Bidirectional BFS but IANAL.

I did notice that someone pointed out that they get different results by swapping the order of X and Y. This seems pretty surprising?

Well done again!

jwngr · 8 years ago
Thanks, glad you enjoyed it!

> I was expecting the site to tell me how to start at page X and get to page Y with the min number of clicks.

Yup, this is exactly what the site does, and a bi-directional BFS is an efficient way to do it. The special thing about my bi-directional BFS is that I follow outgoing links when searching from the source page while following incoming links when searching from the target page[1].

> I did notice that someone pointed out that they get different results by swapping the order of X and Y. This seems pretty surprising?

This is expected, because it is a directed graph, with the links on Wikipedia being in one direction. Just because page A links to page B doesn't mean page B links to page A.

[1] https://github.com/jwngr/sdow/blob/master/sdow/breadth_first...

vanderZwan · 8 years ago
I had a conversation with a friend a few weeks ago about how surely this already exists, and if not, someone should make it.

Any plans to filter by mutual paths?

jwngr · 8 years ago
I'm definitely not the first to think of it or build a tool for it (lots of similar projects gave me inspiration), but I think I'm the first to make it really fast and with a nice usable UI. And to actually open source the code so others can build it themselves.

Can you tell me more about what you mean by filtering by mutual paths?

bluetwo · 8 years ago
I would like to hear a little more on how you organized the search and what you are pre-processing and what you calculate on-the-fly. Thanks.
jwngr · 8 years ago
The database creation script[1] has a lot of Unix junk in it, but reading through the comments and echo statements should give you an idea of what it does. The end result is a SQLite database of about 9 GB with four tables, whose schema are described in the README[2]. The big things that are precomputed: redirects are "auto-followed" to reduce the total graph size, and all incoming and outgoing links are stored in a |-separated string for each page (in the `links` table).

Every time a query is made, a bi-directional breadth-first search[3] is run which uses the |-separated incoming and outgoing links and runs a fairly standard BFS algorithm. A lot of the hard work was precomputed, which minimizes the number of required database queries and makes each search respond fairly quickly.

[1] https://github.com/jwngr/sdow/blob/master/database/buildData... [2] https://github.com/jwngr/sdow#database-creation-process [3] https://github.com/jwngr/sdow/blob/master/sdow/breadth_first...
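The redirect "auto-following" step can be sketched like this (a toy over in-memory dicts; the real script does the equivalent over the full SQL dumps):

```python
def resolve_redirects(links, redirects):
    """Collapse redirect pages out of a link graph.

    `links` maps page -> list of link targets; `redirects` maps a
    redirect page to the page it points at. Both are hypothetical toy
    inputs; the real pipeline operates on Wikipedia's SQL dumps.
    """
    def follow(page):
        # Follow redirect chains, guarding against redirect loops.
        seen = set()
        while page in redirects and page not in seen:
            seen.add(page)
            page = redirects[page]
        return page

    resolved = {}
    for page, targets in links.items():
        if page in redirects:
            continue  # drop redirect pages themselves from the graph
        out = []
        for t in targets:
            t = follow(t)
            if t != page and t not in out:
                out.append(t)  # de-duplicate and drop self-links
        resolved[page] = out
    return resolved
```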

larkeith · 8 years ago
I've not yet had a chance to look over the code (in case it's already there or infeasible due to architecture), but you may wish to consider caching prior queries and their results - this seems like the type of service that would be likely to have certain paths shared widely, such as the first few top-level comments on this post.
jwngr · 8 years ago
I'm not sure caching would help a ton given how I structure the data and do my searches in batches of pages, not for individual pages. I already do some "caching" by precomputing all incoming and outgoing links for each page when I create the database, which, as you would expect, yields a huge performance improvement. A cache certainly would help, but I would expect the hit rate on it to be extremely low, making it not worth the effort. I may have a different opinion after analyzing some of today's results though. Thanks for the suggestion!
sente · 8 years ago
This is awesome.

Just a heads up - some of the node colors can be difficult to differentiate for people who are red/green colorblind. Very minor, just wanted to mention it though.

techaddict009 · 8 years ago
Looks pretty cool.

Simple question I have: are you hitting the Wikipedia API live, or do you have a dump of Wikipedia that you run through?

If running through a dump, do you update it regularly?

Thanks in advance.

jwngr · 8 years ago
The autocomplete suggestions hit the live Wikipedia API[1]. The actual search algorithm is on a dump of Wikipedia[2], which I plan to update monthly.

[1] https://github.com/jwngr/sdow/blob/f39398d112fecf7b993c64bd4... [2] https://github.com/jwngr/sdow#data-source
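For reference, autocomplete against the live API boils down to a MediaWiki opensearch request. A minimal URL builder (the exact parameters the site sends may differ; this only constructs the URL rather than fetching it):

```python
from urllib.parse import urlencode

WIKI_API = "https://en.wikipedia.org/w/api.php"

def suggest_url(prefix, limit=10):
    """Build a MediaWiki opensearch URL for autocomplete suggestions.

    Fetching the URL (e.g. with urllib.request) returns JSON of the
    form [query, titles, descriptions, urls].
    """
    params = {
        "action": "opensearch",
        "search": prefix,
        "limit": limit,
        "format": "json",
    }
    return WIKI_API + "?" + urlencode(params)
```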

whateveruser · 8 years ago
Hey man, just a nitpick. The input fields fudge up when using dark GTK themes, as in text isn't legible unless I select it. Might wanna look into it.
jnbiche · 8 years ago
Why not use a graph database like Neo4j instead of SQLite? This seems like the perfect use case.

Is it because of the resources required to run one versus SQLite?

jwngr · 8 years ago
I actually had a friend suggest it to me and the Neo4j docs happen to be one of the many tabs I currently have open. I was already so far into using SQLite for this project and I wanted to ship it, so I decided to stick with what I had. I would be interested to see how Neo4j performs with such a big dataset (the resulting SQLite file is around 9 GB with nearly 6 million nodes and roughly 500 million links). I was a bit worried that Neo4j wouldn't be able to scale to a graph of that size, but that is a completely untested and ignorant opinion. If you have any experience with Neo4j, I'd love to hear your thoughts.
johnhenry · 8 years ago
Great project! I wonder if you might be willing to go into more details about what made it an "interesting technical challenge"?
jwngr · 8 years ago
The sheer scale of Wikipedia (5 million pages, half a billion links) made it difficult to make the searches fast. Simply downloading the Wikipedia database dumps and parsing them into my own database took over a day on my first successful attempt. The site returns most results in just a few seconds despite the giant graph size.
sidcool · 8 years ago
This is pretty cool. It would be great to write a blog around your technical decision making for this project. Thanks!
orf · 8 years ago
Damn, you beat me to it! I've been hacking on something similar for the longest time. Thanks for sharing your code!
VikingIV · 8 years ago
Were either of your creations around since ~2012? I recall someone sharing this very concept in a room on turntable.fm, except it would list its discovery in real-time as an ordered list. I've been racking my memory for 2 years trying to find a link again, but here we are!
nathanken · 8 years ago
Really cool. I'm interested to know how the graph is built. Did you use any third-party components?
jwngr · 8 years ago
Thank you! The graph is built using vanilla d3, no library on top of it. The code for it lives all in one file, ResultsGraph.js [1]. I pieced together the code from a handful of other attempts online. I am still not 100% pleased with the performance of it with a larger number of nodes (250+), but that seems to be a common complaint with the d3 force simulation layouts.

[1] https://github.com/jwngr/sdow/blob/master/website/src/compon...

zapt02 · 8 years ago
How big does the SQLite database get? How do you maintain such great performance? (Other than indexes?)
jwngr · 8 years ago
The resulting SQLite database file is currently 8.3 GB, most of which is taken up by the `links` table. The big performance wins are having a handful of indexes (see the .sql files[1] for the database's schema) and preprocessing a lot of data so I don't have to do duplicate work every time a query occurs. For example, instead of the `links` table going from `source_id` to `target_id` and having a ton of rows with the same `source_id`, I go from `id` to `outgoing_links` (which is a |-separated string of all target page IDs). Computing each page's incoming and outgoing links is the really heavy work and I only do that at database creation time, using a beefy GCP machine with 8 vCPUs, 52 GB RAM, and a 256 GB SSD. It still takes about an hour, but it's a one-time cost and means I can run the actual service on a much smaller machine which won't cost me a fortune to maintain. Also, SQLite is just very performant out of the box, so as usual, it's a matter of choosing the right tools for the job.

[1] https://github.com/jwngr/sdow/tree/master/database
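A toy of the table shape described above, using an in-memory SQLite database (column names are guessed from the comment; see the project's .sql files for the real schema):

```python
import sqlite3

# One row per page, with outgoing links packed into a |-separated
# string, instead of one row per (source, target) pair.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE links (id INTEGER PRIMARY KEY, outgoing_links TEXT)"
)
conn.execute("INSERT INTO links VALUES (1, '2|3|4')")
conn.execute("INSERT INTO links VALUES (2, '4')")

def outgoing(page_id):
    """Fetch a page's outgoing link IDs with a single indexed lookup."""
    row = conn.execute(
        "SELECT outgoing_links FROM links WHERE id = ?", (page_id,)
    ).fetchone()
    if row is None or row[0] == "":
        return []
    return [int(x) for x in row[0].split("|")]
```

One indexed primary-key lookup per frontier page replaces a scan over millions of (source, target) rows, which is where the big win comes from.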

drej · 8 years ago
Looks pretty cool - do you have any plans to support subsites other than enwiki?
jwngr · 8 years ago
Possibly... follow this GitHub issue[1] if you want to be notified about it.

[1] https://github.com/jwngr/sdow/issues/11

turc1656 · 8 years ago
Not sure if you deliberately designed it this way, but I noticed when spot checking some results that it includes the bibliography section links as connections. This seems like it may not be desirable. Example, I did a search that went from the Crusades to Buzz Aldrin and I noticed that Routledge was the first hop from the Crusades. It strikes me as odd that Routledge (a publishing company) would be mentioned on the Wiki article for the Crusades. So I went to look and noticed the link it took was the citation for a book published by this company. I wouldn't really count that as a legitimate hop since it's a citation, not a content link.

EDIT - I noticed this also applies to the Notes section.

trishume · 8 years ago
A few years ago I made something similar (http://ratewith.science/) that only uses bi-directional links, that is, pages that both link to each other, and this gives much more interesting results.

When two Wikipedia pages both link to each other they are usually related in some reasonable way, but unidirectional links give you things like Wikipedia -> California, which only exists because Wikipedia is headquartered in California, a pretty weak connection.

Other than the fact that I have it running on an overburdened tiny VPS, my app is also really fast, even though I only do a unidirectional BFS, because I use a custom in-memory binary format that's mmapped directly from a file that's only 700 MB, plus a tight search loop written in D.
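Filtering a directed link graph down to its mutual (bidirectional) edges is a small pass over the adjacency map. A sketch over toy in-memory data (not ratewith.science's actual code):

```python
def mutual_links(outgoing):
    """Keep only bidirectional edges: pages that link to each other.

    `outgoing` maps page -> set of pages it links to (toy input).
    Returns a symmetric adjacency map; pages with no mutual links
    are dropped entirely.
    """
    mutual = {}
    for page, targets in outgoing.items():
        # Keep target t only if t also links back to this page.
        kept = {t for t in targets if page in outgoing.get(t, set())}
        if kept:
            mutual[page] = kept
    return mutual
```

This is exactly why weak one-way connections like Wikipedia -> California drop out: California's article doesn't link back to Wikipedia's.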

cyphar · 8 years ago
Maybe I made a mistake somewhere, but I'm not sure it uses bi-directional links. For a really odd search like GoldenEye -> Abbotsford [1] it uses multiple "special" Wikipedia links that I'm pretty sure wouldn't be bidirectional.

[1]: http://ratewith.science/#start=Goldeneye&stop=abbotsford

mulmen · 8 years ago
On a scale of The Monty Hall Problem to Manifest Destiny this is a 5!
amelius · 8 years ago
Perhaps you could refine that by also allowing double backward hops, triple backward hops etc.
dingo_bat · 8 years ago
Pretty cool!


jwngr · 8 years ago
Yeah, unfortunately I don't know of any way to differentiate the different types of links. Wikipedia's pagelinks database doesn't differentiate them. I agree it's undesirable, but I just cannot figure out how to cull them.
turc1656 · 8 years ago
I'm not sure how the backend is structured, but it seems that you must parse the individual pages at some point or another. I took a quick look at the Wikipedia HTML for a few pages and I would suggest stripping out anything within (or nested inside) of classes like "mw-cite-backlink", "reference-text", "citation book", "citation journal", etc. Also, you can probably strip out anything inside of a <cite> HTML tag.

I'm sure there are more classes and tags, but that hopefully should give you a solid place to start.

EDIT - You can also strip out or ignore anything inside of the ordered list for references - <ol class="references">...</ol>

Also, some pages aren't documented in the same way, so something like this page - https://en.wikipedia.org/wiki/X_Window_System - doesn't have any classes or easy way to parse it for the References section even though the Notes section was set up in a more organized way. However, you could take note that the <span> tag contains class="mw-headline" id="References" and the text value is also References and then ignore everything until the next <span> begins.

incompatible · 8 years ago
If it was technically possible, it would probably also be worth culling anything linked within a template. Like disambiguation headers at the top of a page, or semi-related lists grouped in blocks at the bottom (things like "Cities in Australia").
posterboy · 8 years ago
That might be a problem with Wikipedia itself; this project can hardly fix it. I mean, that link has hardly any business being there. "Oxford University Press" isn't linked either, though the page exists, and why would it be?
pradn · 8 years ago
I think you'd have to parse the raw wikitext dump, keeping track of which section each link belongs in. Since the raw dumps is something like 50 GB of text, this sort of thing takes a while.
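A rough sketch of what section-aware link extraction from raw wikitext could look like (toy regexes only; real wikitext has templates, files, pipe tricks, and many more cases):

```python
import re

# == Section == headings and plain [[Target]] / [[Target|label]] links.
HEADING = re.compile(r"^=+\s*(.*?)\s*=+\s*$", re.MULTILINE)
LINK = re.compile(r"\[\[([^\]|#]+)")

def links_by_section(wikitext):
    """Group link targets by the section heading they appear under."""
    section = "(lead)"
    result = {}
    pos = 0
    for m in HEADING.finditer(wikitext):
        chunk = wikitext[pos:m.start()]
        result.setdefault(section, []).extend(LINK.findall(chunk))
        section = m.group(1)
        pos = m.end()
    result.setdefault(section, []).extend(LINK.findall(wikitext[pos:]))
    return result
```

With something like this, links under "References" or "Notes" could be filtered out before building the graph, at the cost of a much slower parse over the full dump.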


labster · 8 years ago
Anime --> Obesity

https://www.sixdegreesofwikipedia.com/?source=Anime&target=O...

Somehow, I didn't expect a one-stop layover in Dubai.

52-6F-62 · 8 years ago
I went and made it political: Anime -> Alt-right.

The result was a little more predictable... I guess I shouldn't have had to look it up.

https://www.sixdegreesofwikipedia.com/?source=Anime&target=A...

riking · 8 years ago
But going Alt-right -> Anime has a second path through Vaporwave.
make3 · 8 years ago
That's hilarious. We should make this a contest: 2-degree paths with the most unlikely layover.
tucif · 8 years ago
I was getting interesting results until I noticed a pattern involving the presence of "Wayback machine" as the only connecting dot between really different things.

That adds noise, since articles now automatically use the wayback machine for "archived" links, thus generating many paths that do not really connect topics, just because the text "wayback machine" is part of the link text.

It may be an interesting exercise to find outliers like that and compute paths without those nodes.

jwngr · 8 years ago
I considered this and may eventually add an option to ignore those kinds of pages, but I ultimately felt like the current mode remains more true to my goal for the project which is to traverse the links as any human would be able to. By the way, the two pages with the most incoming links are "Geographic coordinate system" (1,047,096 incoming links) and "International Standard Book Number" (955,957 incoming links).
nayuki · 8 years ago
Indeed, the Wikipedia pages "Geographic coordinate system" and "International Standard Book Number" have the highest PageRank. See: https://www.nayuki.io/page/computing-wikipedias-internal-pag...
purell_hack · 8 years ago
After spending way too much time I got 9 degrees of separation. I did piggyback off of someone else's work with "Phinney". https://www.sixdegreesofwikipedia.com/?source=Lion%20Express...

I found a ton of 5 degree paths and only a couple of 6 degree ones. Then I pulled out the "big guns" (the dead-end pages category). https://en.wikipedia.org/wiki/Category:Dead-end_pages_from_F...

johan_larson · 8 years ago
That's impressive.

Wikipedia is highly connected. Even pretty darn different things often seem to have only three degrees of separation. For example, here's Ramesses II to Ankylosaurus in three:

https://www.sixdegreesofwikipedia.com/?source=Ramesses%20II&...

After trying a bunch of things, I finally found a fourth-degree separation: William the Conqueror to Ankylosaurus.

https://www.sixdegreesofwikipedia.com/?source=William%20the%...

plaguuuuuu · 8 years ago
I don't know if this is legit. The Lion Express node only connects to a couple of Wikipedia's generic help pages, which I'm pretty sure don't link back to Lion Express.

https://en.wikipedia.org/wiki/Help:Link
https://en.wikipedia.org/wiki/Help:Searching

purell_hack · 8 years ago
Here's one with 7 hops and not using The Lion Express. https://www.sixdegreesofwikipedia.com/?source=Zevenhoven&tar...

Like I said I spent too much time on this yesterday.

pacuna · 8 years ago
Damn, I had trouble finding more than 3 degrees
TremendousJudge · 8 years ago
yeah I just cheated by using orphaned articles and managed to 'win'
colemannugent · 8 years ago
This makes the "How many clicks to Hitler" game much faster.

For the uninitiated, the game was to click the "Random Article" link in the sidebar and count how many links it took to get to Hitler. It is really interesting to see just how big an event WWII was. Every country's article has a section on its involvement or why it was not involved.

After playing with it more, this is pretty fun. I vote that a "degrees from Hitler" score be added to the top of every article. I think it might be an interesting proxy for how esoteric a particular page is.

cortesoft · 8 years ago
This reminds me of the Wikipedia rule I learned a while back: if you click the first link in an article (besides the pronunciation guide), you will always end up on Philosophy.
arglebarnacle · 8 years ago
According to the wikipedia page about this rule/game, over 97% of pages have this property. Interestingly, "Mathematics" seems to be a rare exception. You get stuck in a loop:

Mathematics -> Quantity -> Multitude -> Counting -> Elements -> Mathematics
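Chains like this are easy to check mechanically: follow a first-link function until you hit Philosophy, revisit a page, or dead-end. A sketch with `first_link` as a stand-in for real article parsing:

```python
def follow_first_links(start, first_link, max_steps=100):
    """Follow first links until reaching Philosophy, looping, or dead-ending.

    `first_link` is a hypothetical callable mapping a page title to its
    first link (or None). Returns (path, status) where status is one of
    'philosophy', 'loop', 'dead end', or 'gave up'.
    """
    path, seen = [start], {start}
    page = start
    for _ in range(max_steps):
        if page == "Philosophy":
            return path, "philosophy"
        page = first_link(page)
        if page is None:
            return path, "dead end"
        if page in seen:
            path.append(page)  # show the page that closes the cycle
            return path, "loop"
        seen.add(page)
        path.append(page)
    return path, "gave up"
```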

jffry · 8 years ago
There's even a great page with a small graph and a rundown of some more resources: https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosoph...
aidenn0 · 8 years ago
Finding ones that don't go through "science" is interesting.
freefal · 8 years ago
Woah. This works. Neat.
bringtheaction · 8 years ago
Indeed. First thing I tried as well.

Found 1 path with 2 degrees of separation from Bitcoin to Adolf Hitler in 1.33 seconds!

Bitcoin -> Austria -> Adolf Hitler

AstralStorm · 8 years ago
Game is much more challenging if you ban pages that also have big categories.

(Look if Category:X exists and add some cutoff based on the number of links there.)

Obviously also lists.

2T1Qka0rEiPr · 8 years ago
My colleague saw me looking at this site, and I went through exactly the same thought process as you! His feeble attempts at non-Hitler linked articles failed miserably.
swalsh · 8 years ago
Hilarious, my first search was "OFDM -> Hitler" for some completely unknown reason. (2 steps for the curious)


fortythirteen · 8 years ago
Age of Enlightenment -> Consumption of Tide Pods[0]

[0] https://www.sixdegreesofwikipedia.com/?source=Age%20of%20Enl...

vanderZwan · 8 years ago
Interestingly, if you go the other way it blows up:

[0] https://www.sixdegreesofwikipedia.com/?source=Consumption%20...

saagarjha · 8 years ago
I'd assume very few articles lead in to "Consumption of Tide Pods", while many do to "Age of Enlightenment".
QML · 8 years ago
Kind of reminds me of PageRank in some way; the number of "back edges" is not guaranteed to equal the number of "forward edges", hence the difference in the number of paths.
jacobwilliamroy · 8 years ago
Mensa International -> Consumption of Tide Pods:

https://www.sixdegreesofwikipedia.com/?source=Mensa%20Intern...

dingo_bat · 8 years ago
Meg Whitman -> Consumption of Tide Pods

https://www.sixdegreesofwikipedia.com/?source=Meg%20Whitman&...

ಠ_ಠ

hsrada · 8 years ago
It shows 8 3-click paths for me. https://imgur.com/a/uo9oi
fapjacks · 8 years ago
Actually, interestingly, I've been introducing this concept as a "party game" with other nerds at RL gatherings for some years now. The goal is to start on a random page and find the shortest path to another random page by only clicking links in the articles. It can be quite a lot of fun, despite what you're thinking! And anybody can understand the challenge and compete and have fun. It's not just something for geeks.
wldcordeiro · 8 years ago
I had run into it specifically as counting how many links it takes to get to the page for Hitler.
jnbiche · 8 years ago
What is an RL gathering?
chaseha · 8 years ago
RL == "Real Life"
sincerely · 8 years ago
likely stands for "real life"
posterboy · 8 years ago
Real Life