Posted by u/AussieWog93 4 years ago
Poll: Did you know that HN allows you to make polls?
It isn't obvious, but it seems like you can!

The link to do so is here:

https://news.ycombinator.com/newpoll

grumblepeet · 4 years ago
I did not know that you could make polls on HN, and I’m also 5 karma points short of being able to make them. Besides that, I don’t think I would make polls here; I value the good story links posted and the insightful replies, and often when I see polls in other places they just seem to ignite flame wars and low-value discussions. It’s a good thing that it is obscure; long may it be rarely used!
readingnews · 4 years ago
omg, I did not even know we have "karma" here (never looked at my "profile"). I actually read HN to get away from stuff like that. Funny, perhaps people come to HN because most people here do not pay attention to those items.
telesilla · 4 years ago
Karma on this site is more of a means to manage trolls, by way of leveraging the community's response to individuals' posts that may over time show a pattern. Karma doesn't mean much here in terms of visibility - over time your best-voted comment might be at the bottom of a long thread. If you get enough, you can downvote, but I'm wary of doing that outside of obvious trolling or maliciousness.
wongarsu · 4 years ago
HN tries to curb the karma addiction by not showing the karma of comments (other than your own), while still keeping it as a proxy for reputation (both in the profile and to gate certain features, most notably the ability to downvote).

Not being able to easily compare yourself to others certainly helps, even if it isn't perfect.

josefrichter · 4 years ago
You thought you escaped, but karma eventually got you.
serial_dev · 4 years ago
I'm glad you now got the needed karma to create polls :)
Fire-Dragon-DoL · 4 years ago
How much karma is needed?
dredmorbius · 4 years ago
"How accurate are online polls?"

[M]ost online polls that use participants who volunteer to take part do not have a proven record of accuracy. There are at least two reasons for this. One is that not everyone in the U.S. uses the internet, and those who do not are demographically different from the rest of the public. Another reason is that people who volunteer for polls may be different from other people in ways that could make the poll unrepresentative. At worst, online polls can be seriously biased if people who hold a particular point of view are more motivated to participate than those with a different point of view.

https://www.pewresearch.org/2010/12/29/how-accurate-are-onli...

Rather than serve as accurate assessments of public opinion or beliefs, online polls at best surface sentiments and potential areas of interest. Some are mere amusements. Many serve a darker purpose of advocating for a specific cause or ideology (a "push poll" https://en.wikipedia.org/wiki/Push_poll), fishing for either insights on how audiences might be manipulated through advertising or propaganda, or outright soliciting personal information that is of use in hacking into accounts by guessing passwords or so-called "security questions".

nopenopenopeno · 4 years ago
I guess we’ll just have to accept that we may never know how many people were previously aware that they could create polls on HN.
Nbox9 · 4 years ago
It’s not just that online polls do not provide valid data, it’s that they provide misleading data.

Semaphor · 4 years ago
This is always very apparent on /r/SampleSize (a subreddit dedicated to posting and taking surveys): while Reddit as a whole has far fewer women than men, women often outnumber men there, Germans are usually the 3rd-biggest group after USA-ians and British, and transgender people sometimes make up 15%. Luckily, most people there (judging by the comments) are aware of that.
kwkelly · 4 years ago
Most serious online pollsters are using random sampling through panel providers and stratify based on known demographic statistics. Samples are re-weighted by raking (which reduces bias but increases variance) to better match population statistics.

This of course doesn’t mean all sources of bias are eliminated, but you also can’t eliminate all sources of bias in phone polling. Just like those taking polls on the internet, certain slices of the population that you can’t control for or don’t know about may be more or less willing to participate.
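For anyone curious what that re-weighting step looks like, here is a toy sketch of raking (iterative proportional fitting) on a 2x2 demographic table; every number below is invented for illustration, not taken from any real poll:

```python
# Toy raking (iterative proportional fitting): re-weight a biased sample so
# its margins match known population margins. All numbers are made up.
# Rows: gender (female, male); columns: age (under 40, 40 and over).
weights = [[0.40, 0.10],
           [0.35, 0.15]]        # observed sample shares (sum to 1.0)
row_targets = [0.51, 0.49]      # known population gender shares
col_targets = [0.40, 0.60]      # known population age shares

for _ in range(100):            # alternate row and column adjustments
    for i, target in enumerate(row_targets):
        s = sum(weights[i])
        weights[i] = [w * target / s for w in weights[i]]
    for j, target in enumerate(col_targets):
        s = weights[0][j] + weights[1][j]
        for i in range(2):
            weights[i][j] *= target / s

# The weighted margins now match the population targets.
print([round(sum(row), 4) for row in weights])                      # rows
print([round(weights[0][j] + weights[1][j], 4) for j in range(2)])  # columns
```

As the comment notes, this reduces bias at the cost of variance: cells that were badly under-sampled end up carrying large weights, so a handful of respondents can move the estimate.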

jodrellblank · 4 years ago
> “Another reason is that people who volunteer for polls may be different from other people in ways that could make the poll unrepresentative.

See also “most of what you read on the Internet is written by insane people” on r/Slatestarcodex - https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...

shusaku · 4 years ago
It’s definitely worth remembering that just as polls can be misleading about sentiment, so can the voting of comments.
mike741 · 4 years ago
These criticisms are not limited to just online polls. One advantage that online polls have is a larger sample size. Gallup polls, for example, tend to have an extremely small sample size of less than 2,000 people, while an online poll can reach hundreds of thousands. Both online and offline have advantages and disadvantages. At the end of the day, if you're being shown a poll that you didn't pay for or participate in, then it's likely intended to influence rather than educate.
kqr · 4 years ago
Systematic bias in participants would be a far bigger worry to me than sample error from few participants.

I'd take a properly randomised poll with 200 participants over a non-random one with two million any day of the week. At least if participation is truly randomised you can calculate the maximum possible extent of the error. With bias, who knows?

dredmorbius · 4 years ago
A larger random sample reduces error by the square root of the sample.

If the sampling is biased, however, all bets are off.

A 100,000-observation poll has 100 times the cost of a 1,000-observation poll (data must be collected from 100 times more samples), but offers only 10x greater accuracy.

You'll find this mathematically in the formula for the standard error of the mean, which divides the standard deviation by the square root of the sample size: sqrt(1000) ~= 31.6, sqrt(100000) ~= 316.

Larger random samples are useful where you're exploring many variables, or very small portions of the population. They afford greater precision. The accuracy however is dictated by the randomness.
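A quick numerical illustration (not from the thread) of that square-root relationship, using the standard error of an estimated yes/no share from a truly random sample:

```python
import math

# For a yes/no question with true share p, a truly random sample of size n
# gives an estimate whose standard error is sqrt(p * (1 - p) / n).
def standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

for n in (1_000, 100_000):
    se = standard_error(0.5, n)  # p = 0.5 is the worst case
    print(f"n = {n:>7,}: standard error = {se:.4f} "
          f"(~±{1.96 * se * 100:.1f} points at 95% confidence)")

# 100x the collection cost buys only sqrt(100) = 10x the accuracy:
# 0.0158 at n = 1,000 versus 0.0016 at n = 100,000.
```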

I had a strong lesson in this a few years back when the question of active user participation in Google+ came up. I'd had one too many hand-wavey assertions that the site was far more active than was generally claimed in the press. I'd realised that G+ had a set of sitemaps files, and included in those were sitemaps of individual Google+ user profiles. Present on the web results for that URL was an indication of whether the profile had never posted publicly at all, or, if it had, what the most recent publicly-posted content was.

Each sitemap file had roughly 50,000 entries. There were something on the order of 40--50,000 profile sitemap files, about 25 GB in all.

I took a gamble and made the assumption (later tested and largely validated) that profiles listed within a given sitemap were themselves a random assortment. This checked out on any number of eyeball analyses (creation dates, user names, global regions, and activity status all seemed both random and uniform over a few tested files). So I selected one sitemap file (itself at random) and over the course of a few days using a pretty modest laptop and broadband connection pulled down and web-scraped some 50,000 entries. A simple pattern match told me whether or not the profile was active, and when.

Within the first 100 profiles viewed, the trend was very clear. Only about 9% of profiles seemed to have ever posted any content. That percentage varied between roughly 7--12% initially, but rapidly converged as my dataset grew.

I let the run continue regardless. I resampled my sample, subsetting it variously ("Monte Carlo estimation"), to see if the values varied (I believe either 60- or 100-record subsamples), and again, the same 7--12% or so was returned for each. Looking at recent activity (within the month during which the analysis was performed), the rate was 0.3%. The analysis also revealed just how much the forced integration of YouTube and G+ had inflated G+ activity numbers (a bit over 1/3 of all most-recent activity).
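That resampling check can be sketched in a few lines; this is not the original tooling, just a simulation with a made-up 9% activity rate standing in for the scraped profiles:

```python
import random

# Simulated stand-in for the scraped data: 50,000 profiles, each "active"
# (ever posted publicly) with a made-up probability of 9%.
random.seed(42)
population = [random.random() < 0.09 for _ in range(50_000)]

# Monte Carlo check: draw many small random subsamples and see how much
# the activity estimate wobbles across them.
estimates = []
for _ in range(100):
    subsample = random.sample(population, 100)  # 100-record subsamples
    estimates.append(sum(subsample) / len(subsample))

print(f"full sample: {sum(population) / len(population):.1%} active")
print(f"subsamples:  {min(estimates):.0%} to {max(estimates):.0%} active")
```

If the estimates from many independent subsamples cluster around the full-sample figure, that's evidence the ordering within the file isn't hiding a systematic pattern.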

This generated some blow-up on G+ among members there, and I got called a few things, as happens. Google themselves never formally responded (I did hear from a few Googlers who contested findings or methods.) A few months later, an Internet marketing group, Stone Temple Consulting, re-ran the analysis based on my methodology but on a 10x larger sample (500k profiles), selected from across a much larger set of the sitemaps. They fully confirmed my own headline numbers, though could (thanks to their larger sample) offer more precise insights on smaller groups within the overall population. I had absolutely no participation in the follow-up study, and was unaware of it until it was made public.

Stealth edit/update: Then as now, what annoyed me most about the whole episode was that Google were so obviously dissembling, making up numbers and/or outright lying about Google+ activity, and the press were largely lapping it up, when a very modest investment of time and effort would put paid to the lie. And after I'd done the analysis, armchair warriors continued whinging even as I'd fully documented methodology and tools used, enabling anyone to replicate and confirm or deny findings. Eric Enge of Stone Temple was the only one to do that. I don't generally hold marketers in high regard, but he earned respect from me in doing that.

https://ello.co/dredmorbius/post/naya9wqdemiovuvwvoyquq

https://blogs.perficient.com/2015/04/14/real-numbers-for-the...

(Stone Temple has since been acquired by Perficient.)

tzs · 4 years ago
I wonder what could be done if HN itself were to analyze data from an HN poll, combining it with other data they have on the participants?

Quite a lot about a person on HN can be inferred from that person's comment history. Inferences could also be made from their IP address history if that is logged. Upvote, downvote, flagging, and vouching history if logged would also provide some information.

I wonder if there would be enough to correct for some of the biases?

jart · 4 years ago
Pew Research is a biased source because CGI Script polling poses an existential threat to their business model.
dredmorbius · 4 years ago
No.

Pew Research (and pretty much any other credible research institution or academic) are domain experts, understand sampling methodology, and the very-well-known biases which occur from biased and/or self-selecting samples.

Public polling can be exceedingly accurate based on surprisingly small samples (roughly 300 in most national political polls, for example, and even that is generous), so long as the sample is in fact truly random. Oh, and that the responses are not similarly filtered.

One of the most famous cases of a poll which failed due to sampling bias was in the 1948 US Presidential election, which resulted in the Chicago Tribune's erroneous headline, "Dewey Defeats Truman". This was drilled hard in my own stats education several decades ago and remains a sharp lesson in the risks of biased sampling. For a good description of the error, see:

https://web.archive.org/web/20180823071102/https://textbook....

asplake · 4 years ago
Polls brought a plague of dumb clickbaitiness to LinkedIn, unwittingly amplified by commenters complaining of their lack of nuance. Let’s not do that here
jmcgough · 4 years ago
Back around 2012 HN had a massive amount of polls every day on the frontpage. I even wrote a scraper to scrape them all off and write them to a hnpolls website. Glad it's not really a thing these days.
jonathankoren · 4 years ago
Let’s vote on it.
jfoster · 4 years ago
Problems like that don't have to be solved by changing the existence of a feature. There are many ways to address such an issue; adjusting the incentives, changing the ranking algorithm, etc.
seanhunter · 4 years ago
Exactly. If they start getting used at all, I hope the feature gets disabled.
laoganmaplz · 4 years ago
No, but I also sincerely believe that dang is a lizard. Not because I have any reason to believe that dang is a lizard, but because my ethos obligates me to believe anyone is a lizard when I see an opportunity to do so.
gruez · 4 years ago
>but because my ethos obligates me to believe anyone is a lizard when I see an opportunity to do so.

Sounds like you're one of those people

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and...

Tempest1981 · 4 years ago
I can even vote twice; choices aren't mutually exclusive.
elihu · 4 years ago
Yep, I tried it before reading your comment.

I think that's great. Approval voting is awesome, and probably a better default for a lot of things anyways.

Even for mundane things like "where do you want to eat lunch", with first-past-the-post you run the risk of selecting a place most people don't want to eat at, because the vote was split between a bunch of options most people preferred. Really, you want people to distinguish between places they want to eat at and places they don't, and the optimal thing is to select the one that the most people do want to eat at. That's basically how approval voting works.
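A toy simulation of that lunch example (the restaurants and ballots are invented) comparing the two counting rules:

```python
from collections import Counter

# Toy lunch vote (made-up ballots). Each ballot lists every place the
# voter would be happy to eat at, first choice first.
ballots = (
    [["Thai", "Sushi"]] * 3 +   # prefer Thai, also fine with Sushi
    [["Sushi", "Thai"]] * 2 +   # prefer Sushi, also fine with Thai
    [["Burgers"]] * 4           # Burgers only
)

# First-past-the-post: each voter names only their first choice,
# so the Thai/Sushi majority splits its vote.
fptp = Counter(ballot[0] for ballot in ballots)

# Approval voting: every approved option gets a vote.
approval = Counter(place for ballot in ballots for place in ballot)

print("FPTP:    ", dict(fptp))      # Burgers wins 4-3-2 on a split vote
print("Approval:", dict(approval))  # Thai and Sushi each get 5 of 9
```

Under first-past-the-post the Burgers minority wins with 4 of 9 first choices, even though a 5-of-9 majority would have been happy with either Thai or Sushi, which is exactly the failure approval voting avoids.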

dragonwriter · 4 years ago
> Even mundane things like “where do you want to eat lunch” with first-past-the-post you run the risk of selecting a place most people don’t want to eat

If you have open ballots and either approval is a commitment to join or disapproval is a waiver of the right to join, approval is a very good method for choosing group activities, in a way it is not for most public elections, where there is no coherent meaning to the approval/disapproval divide, so it ends up just being a very weird forced reduction in resolution of a preference ballot to two preference ranks. (The lack of coherent meaning between different ballots is a problem that most analysis ignores in basically all voting systems that aren’t either bullet ballots or full forced or unforced preference ballots; of popularly proposed alternative voting methods it’s a particular issue for those using approval or range/score ballots, or variations on either.)

vaylian · 4 years ago
Reality is complex. HN supports a complex reality.
___q · 4 years ago
...even though the options are; I voted "yes" and "no"
maxrf · 4 years ago
"Sorry, you need over 200 karma to create a poll." :(
javierga · 4 years ago
That’s going to take a while to process for me. Have tried to be an active user since 2014 and whenever I have something of value, it’s already been expressed better by someone else.
hyperman1 · 4 years ago
Mark Twain had a technique for creating text of value.

If you hear some news, notice your immediate reaction. Everyone thinks the same thing, so dump that thought in the bin. Same for your second immediate reaction. The third or fourth reaction is maybe different enough to merit writing down.

toastedwedge · 4 years ago
I almost didn't reply to this to say, "This is the same for me as well," because, well, it's the same! Someone already expressed, in better phrasing, what I would otherwise say.

But given the subject matter, I feel a reply is in order.

awild · 4 years ago
I've deleted many comments before submission because I decided I wasn't in the mood for a debate on the topics.
sharkweek · 4 years ago
Get in the habit of submitting interesting articles you read! It takes 15 seconds, and taking a handful of shots at submitting each month adds a lot to the community.
usr1106 · 4 years ago
Well, it wasn't so slow for me, but it did feel slow.

What I found most disappointing is that while I am here mostly for software development topics, I got my karma on economic, political, geographic, Europe-vs.-US, etc. topics.

arichard123 · 4 years ago
Me too. Your comment is one such example.
Aeolun · 4 years ago
Even two hundred valueless comments will get you there though.

Question is if you want to make those.

renewiltord · 4 years ago
Don’t worry too much. Go find an Apple thread and post about CSAM or proprietary walled gardens, find any random CVE thread and post about how “heads must roll” or some shit, go find a meta thread and post about how repetitive the comments can be.
addandsubtract · 4 years ago
Easy, just start a "AWS is down again" thread. The chances are 50/50 that you're right :P

/s

simlevesque · 4 years ago
I'm impressed that you got a 5-letter username less than 3 years ago.
63 · 4 years ago
Maybe not as impressive since it's a number, but I got mine in 2021. HN names aren't terribly competitive, it seems. Maybe it's due to the more mature culture here.
icy · 4 years ago
I managed to snag this one by emailing hn@ycombinator.com to transfer it to me as the nick was dormant for years.
AussieWog93 · 4 years ago
For what it's worth, this whole submission came about because another user was shocked that someone had registered the HN username "ford" in 2022.

Turns out another user named "fordprefect" existed and had commented on a poll from 2009, which surprised the heck out of both of us.

shxuw · 4 years ago
I got a 5 letter username less than 3 minutes ago
dheera · 4 years ago
26^5 is 11881376, surely there aren't that many users here yet (?)

blowfish721 · 4 years ago
Have an upvote, one less karma to go.
kingcharles · 4 years ago
And you too, sir. Have one yourself. On the house.
axegon_ · 4 years ago
One constructive rant on a controversial topic and you're done.
kayodelycaon · 4 years ago
Eh. I’ve found that a reasoned argument works better than a rant.

When I send a story out for beta reading, I go through and fix all the major spelling and grammar mistakes first. I shouldn’t need to, but people get hung up on the spelling and don’t see the story.

It’s the same with rants. People see the anger and frustration and miss the argument being presented.

You can argue people should look past the emotion and consider the logic. I’d agree. And I do my best to do that myself. But the reality is many people won’t. I choose to be pragmatic and rewrite any rant more carefully so I’m heard. Or I delete it because it doesn’t add meaningfully to the conversation.

Also, some controversial topics are the result of irreconcilable beliefs. See any discussion of Apple. In those threads, people talk about things they value and other people don’t value those things. There are arguments and rants that don’t go anywhere as a result.

As a result of this observation, I approach communication deliberately. If I think a rant will be heard, I’ll let myself rant. If I don’t, I take time to think why I disagree and figure out how to phrase it in less aggressive language.

There are people who really dislike this approach for a variety of reasons. I enjoy talking to them because I don’t have to do this song and dance number.

TL;DR read the room and speak in a way you’ll be understood. :)

iFred · 4 years ago
You’ll get there one day. I’ve kind of enjoyed the idea that downvoting is limited to a few, meaning someone that is greyed out must have said something very wrong or worthless. I’m guessing that gate keeping polls also means that they’re more of an event than low rent submission spam.
rendall · 4 years ago
Grayed out, to me, about as often means that someone expressed an opinion, and others disagreed with that opinion. Or they tried to joke and it fell flat.

¯\_(ツ)_/¯

You're downvoted here because others disagree that someone that is greyed out must have said something very wrong or worthless, but that's not a very wrong or worthless opinion. Only naive, perhaps.

I try to use downvotes rarely, and only for "egregiously contravened community guidelines" not "expressed opinion I strongly disagree with"

Keep participating! Downvotes (and flags) are not generally a big deal if your overall contribution is positive

yjftsjthsd-h · 4 years ago
I know we're not supposed to comment on voting, generally, but this might be the... meta-funniest grayed-out comment I've ever seen.
binarysolo · 4 years ago
I want to upvote this just to keep it out of gray (since I do think it adds to the conversation), but this being grayed out is just hilarious on a meta level.
riidom · 4 years ago
And not greyed out anymore right after upvoting and responding.

The downvoting system is indeed pretty witty, in that few people can downvote, but a lot more can undo the downvotes. I can't downvote myself yet, but I do check downvoted comments and when I feel that it got downvoted by personal bias, I simply undo it.

And usually don't follow up on what happens next, so whether it gets downvoted again or not I wouldn't know. But then I don't care that much, in most cases.

jrootabega · 4 years ago
At the time when I chose my answer, I did know. So I was obligated to pick "Yes".
crtasm · 4 years ago
"Did you know.." refers to before you read the question.
jrootabega · 4 years ago
NOW ya tell me!
doe88 · 4 years ago
Wow, didn't know it could. Found a list of past polls, with really popular ones.

https://hn.algolia.com/polls