If you look at that thread, you'll see I've paid out quite a lot in bounties, somewhere around 50-60kUSD (the amount is not quite precise, because some bounties I completed myself without paying, and for others I paid extra when the work turned out to be more than expected). In exchange, I did manage to get quite a lot of work done for that cost.
You do get some trash, it does take significant work to review, and not everything is amenable to bounties. But for projects that already have interested users and potential collaborators, sometimes 500-1000USD in cash is enough motivation for someone to go from curious to engaged. And if I can pay someone 500-1000USD to save me a week of work (and associated context switching) it can definitely be worth the cost.
The bounties are certainly not a living wage, especially compared to my peers making 1mUSD/yr at some big-tech FAANG. It's just a token of appreciation that somehow feels qualitatively different from the money that comes in your twice-monthly paycheck.
Is this the standard way to do bounties, where you take applications and then choose someone to attempt the bounty? I always thought you'd just state the requirements and the bounty, then screen the submissions and choose a winner.
Granted, this does feel a bit less like asking for spec work, so I can see why they might have chosen to go this way instead of generically accepting submissions.
I only briefly glanced at your project, but it doesn’t look like a commercial offering or a component of one… what is your motivation for paying people to do this work? I would think bounties would be used more often by companies who need some open source feature for interoperability or integration purposes…
Having more money than free time but still wanting a thing to get done.
Lots of folks pay good money for hobbies (video games, golf fees, bicycle purchases, etc.).
Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
I also don't believe it can be one-shotted (there are too many deltas between Notion's API and Obsidian).
With that said, LLMs are great for enumerating edge cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the Obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same order of magnitude ($100-$1k in API cost + dev time), but the additional context (tests, docs, etc.) will be invaluable for future changes to either API surface.
> Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
From my own experience, I don't think that's the case. I wrote a similar Obsidian <-> Notion-databases sync script myself some months ago, and I also used AI in the beginning; but I learned really fast what an annoying mess Notion's API is, and how quickly LLMs get hung up on edge cases. AI is good for getting started with the API, but in the end you still have to fix it up yourself.
LLMs are wonderful for migration. They're also good at exploring APIs.
A month ago I migrated our company's website and blog from Framer to Astro (https://quesma.com/blog/ if you would like to see the end result).
This weekend I created a summary of Grafana dashboard data. LLMs are tireless at checking hypotheses, running grunt-work code, seeing results, and iterating on that.
What takes more than a single shot is checking whether the result is correct (nothing missed, nothing confabulated, no default fallbacks) and maintaining code quality (I refactor early and often; this is one place in Claude Code where there is no alternative to using Opus 4.1). Most of my time spent talking with Claude Code is on refactoring - and it requires the most knowledge of tooling, abstraction, etc.
It's funny reading these threads. A GIF of a few clicks is evidence that “it works” for the author. It's like citing lines of code as a measure of your productivity: it appears impressive at first glance, but any expert will call you out as a snake-oil salesman.
My guess is this is close to the level of testing they put forth for ensuring the AI-generated code works (based on my experience with other AI-heavy devs). They didn't take any time to thoroughly review or understand the code. A large file doesn't necessarily mean shoddy work, but it certainly suggests it's likely.
Can't help but think that if the author of that PR had been less defeatist and snarky, they would have had a chance at a decent discussion about it being a viable option (with AI).
I don't get the LLM shilling. If you think you can earn 50k with some prompts, then earn it. Why _instead_ shill for LLMs? It feels like stock traders selling courses on how YOU could earn big bucks, while they themselves have photos taken with Ferraris rented for a day…
We all lean on LLMs in one way or another, but HN is becoming infested with wishful prompt engineers. Show, don't tell. Compete on the market instead of posting yet another PoC.
The bounty here is just $5k, and if you read my comment, I'm suggesting that the maintainer(s), even with LLMs, will likely spend a similar amount in inference plus the cost of their own time; however, they'll likely produce a more robust solution than the bounty alone will.
To be clear: I’m not advocating that someone simply vibe-codes this up.
That’s already happening (2 PRs when this hit HN), both with negative feedback.
I’m suggesting that the maintainers should give LLM-assisted dev a try here, as they already have context on the Obsidian-side API.
Several years ago I made a bare-metal Notion-to-Obsidian conversion script. At the time Bases wasn't available, so databases were just turned into CSV tables. It was a relatively simple, no-dependency Python script: just export your Notion notes as zips of Markdown files, then check every file to fix the linking and the weird naming (with the caveat that not all links are properly exported as Markdown links by Notion).
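In case it's useful to anyone attempting the same thing, here's a minimal sketch of that cleanup pass. It assumes Notion's usual export naming, where every file and folder gets a 32-character hex ID appended and link targets are percent-encoded; the regexes and paths are illustrative, not from the original script, and real exports can produce name collisions once the IDs are stripped:

```python
import re
from pathlib import Path
from urllib.parse import quote, unquote

# Notion exports name things like "My Page 0123456789abcdef0123456789abcdef.md";
# strip the trailing 32-char hex ID from file and folder names.
ID_SUFFIX = re.compile(r" [0-9a-f]{32}(?=\.[A-Za-z0-9]+$|$)")

def clean_name(name: str) -> str:
    return ID_SUFFIX.sub("", name)

def clean_link(target: str) -> str:
    # Export link targets are percent-encoded; decode, strip the ID,
    # then re-encode so the markdown link stays valid.
    return quote(clean_name(unquote(target)), safe="/#")

def fix_export(root: Path) -> None:
    # Rename deepest paths first so parent renames don't break child paths.
    for path in sorted(root.rglob("*"), key=lambda p: len(p.parts), reverse=True):
        new = path.with_name(clean_name(path.name))
        if new != path:
            path.rename(new)
    for md in root.rglob("*.md"):
        text = md.read_text(encoding="utf-8")
        # Rewrite inline markdown links whose targets still carry hex IDs.
        text = re.sub(r"\]\(([^)]+)\)",
                      lambda m: "](" + clean_link(m.group(1)) + ")",
                      text)
        md.write_text(text, encoding="utf-8")

if __name__ == "__main__":
    fix_export(Path("notion-export"))  # the unzipped Notion export directory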
Today I learned that Notion has an API for this. Still, I wonder why it's not just easier to use Notion's "download your pages as markdown". Notion would very much dislike an API that allows users to migrate away from it, and would probably actively sabotage it. "Download notes as markdown", however, is a user-facing feature, which they probably don't want to break (maybe that's changed now that they've belatedly added an offline mode; I don't know).
(I work at Notion and built the HTML exporter during my hiring-process work trial in 2019; opinions are my own.)
I would love to two-way sync a Notion workspace with an Obsidian vault. Notion is focused on online collaboration; Obsidian is focused on file-over-app personal software customization, and there's so much Obsidian can do, especially with plugins, that Notion isn't able to address. If we can make the two work together, even if it's not perfectly seamless, it would extend the usefulness of both tools by uniting their strengths and avoiding the tradeoffs of their weaknesses.
If only I had an extra 24h per day I'd build it myself, but it needs some fairly complex machinery for change tracking & merging, which would require ongoing support, so it's not something I can tackle responsibly as a weekend project.
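To make the "complex machinery" concrete: at minimum, the sync needs a per-note record of what both sides looked like at the last successful sync, so each run can decide whether to push, pull, or merge. A rough sketch of that bookkeeping (the schema and field names are hypothetical, not from any existing plugin):

```python
from dataclasses import dataclass

@dataclass
class SyncRecord:
    page_id: str        # Notion page ID
    vault_path: str     # path of the note in the Obsidian vault
    base_hash: str      # hash of the normalized content at last sync
    remote_edited: str  # Notion's last_edited_time at last sync

def classify(rec: SyncRecord, local_hash: str, remote_edited: str) -> str:
    """Decide what to do with one note on this sync run."""
    local_changed = local_hash != rec.base_hash
    remote_changed = remote_edited != rec.remote_edited
    if local_changed and remote_changed:
        return "conflict"  # needs a three-way merge against the base version
    if local_changed:
        return "push"      # write the local note to Notion
    if remote_changed:
        return "pull"      # write the Notion page into the vault
    return "in-sync"
```

The hard part, of course, is the "conflict" branch: merging block-structured Notion content with free-form Markdown is exactly the ongoing-support burden mentioned above.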
At the least, we could offer YAML frontmatter as an option for Notion's Markdown export feature. Maybe I can get to that today; I have a few spare hours.
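A minimal sketch of what such an option might produce, using PyYAML; the helper is hypothetical, and `properties` is assumed to be already flattened to plain values rather than Notion's raw API property objects:

```python
import yaml  # PyYAML

def with_frontmatter(properties: dict, markdown_body: str) -> str:
    # Serialize page properties (tags, status, dates, ...) as YAML
    # frontmatter so Obsidian picks them up natively on import.
    fm = yaml.safe_dump(properties, sort_keys=False, allow_unicode=True)
    return f"---\n{fm}---\n\n{markdown_body}"

print(with_frontmatter({"tags": ["notes", "imported"], "status": "Done"},
                       "# My Page\n\nHello."))
```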
In the age of Claude Code, having real-time collaboration + local-first markdown + easy-to-write custom plugins is the future. It makes no sense to lock up your documents in a SaaS product that gatekeeps your access to using AI on your own documents.
That's why I've been working on Relay [0] - a privacy-preserving, local-first collaboration plugin for Obsidian.
Our customers really like being able to self-host Relay Servers for complete document privacy while using our global identity system to do cross-org collaboration with anyone in the world.
[0] https://relay.md
I love Notion, but the API leaves a bit to be desired. To build a two-way sync, the API should allow the same functionality that the UI offers. Right now there are some easy things I can't figure out how to do in the API:
- add a date tag to a text block
- create a code block longer than 2000 characters
- set a code block to wrap
This is based on very limited interaction, but it feels like I've hit many snags early on. I imagine things get harder when you get into some of the more advanced or newer functionality.
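On the 2000-character point: as far as I can tell from the docs, that's a per-rich-text-object limit rather than a per-block one, so splitting the source across multiple rich_text segments might get around it. A hedged, unverified sketch — the endpoint and block shape are from the public API docs as I remember them, and the token and page ID are placeholders:

```python
import requests

HEADERS = {
    "Authorization": "Bearer secret_xxx",  # placeholder integration token
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

def long_code_block(source: str, language: str = "python") -> dict:
    # Split into <=2000-char segments, one rich_text object each
    # (the API also caps rich_text arrays at 100 elements).
    chunks = [source[i:i + 2000] for i in range(0, len(source), 2000)]
    return {
        "object": "block",
        "type": "code",
        "code": {
            "language": language,
            "rich_text": [
                {"type": "text", "text": {"content": c}} for c in chunks
            ],
        },
    }

def append_to_page(page_id: str, block: dict) -> None:
    resp = requests.patch(
        f"https://api.notion.com/v1/blocks/{page_id}/children",
        headers=HEADERS,
        json={"children": [block]},
    )
    resp.raise_for_status()
```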
Please invest in the API. I will only love Notion more.
In addition to what's already in the thread, I assume by now somebody has vibe-coded an agent to scan GitHub for bounties and then automatically vibe up a corresponding solution. It will be a fun source of spam for anyone who wants to do the right thing and pay people for good work.
I recently got my first AI-generated PR for a project I maintain, and it was honestly a little stressful.
My first clue was that the PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce that detailed a description would likely also have broken it down into separate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component had been replaced with a comment stating "Instantiating component now requires X".
Except the new instantiation wasn't anywhere. Their coding agent had commented out instantiating the component instead of figuring out the dependency injection.
That component was the container for all of the app's settings.
-
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it didn't understand the architecture well enough. I suspect whoever was piloting it probably tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
> I hope we normalize just admitting when most of a piece of code is AI generated.
People using AI tools in their work is becoming normal. In the end, it doesn't matter how the code was created if the code works and is otherwise high quality. The person contributing is responsible for checking the quality of their contributions. Generally, a pull request that changes half the system without good motivation is clearly not acceptable in most OSS projects. Likewise, a pull request that ignores the existing design and conventions is not acceptable. If you make such a pull request manually, it will probably also get rejected, and you'll get told off by the repository maintainers.
The beauty of the pull request system is that it puts the responsibility on the PR creator to make sure their pull request is good enough. Creating huge pull requests is generally not appreciated and creates a lot of review overhead. It's also good practice to work via the issue tracker and discuss your plans before you start the work, especially with bigger changes. The problem here is not necessarily AI, but people using AI to create low-quality pull requests without communicating properly.
I've not yet received any obviously AI-generated pull requests on any of my projects. But I've used Codex on my own projects for a few pull requests. I'd probably disclose that fact if I were going to contribute something to somebody else's code base, and I'd also take the time to properly clean up the pull request and make sure it delivers as promised.
Not only admitting it - it should be law to mark anything AI-generated as AI-generated, even if AI contributed just a tiny bit. I don't want to use AI slop, and I should be allowed to make informed decisions based on that preference.
Having once used the Notion API to build an OpenAPI doc generator, I pity whoever takes this on. The API was painful to integrate with, full of limitations, and nowhere near feature parity with the Notion UI itself.
Unless you've already done projects in both. Then, it might seem trivial? Idk. I haven't looked at either. But if there is such a person out there, with the spare time to look into it, they might be ideally suited!
Why? It doesn't say you need to have extensive experience with them. I would assume this is mostly to dissuade applicants who are not aware of the potential challenges ahead.
This "exploring" can take tremendous amounts of time, depending on the complexity of these APIs. My time is worth a lot to myself. I am not going to spend many hours for a chance of winning 5k$. If this takes a week off of my free time its not worth 5k to me.
I posted a list of projects offering bounties elsewhere [1] in the thread.
[1] https://news.ycombinator.com/item?id=45278787
Do you think the person you are replying to is Sam Altman?
That being said, yay open source bounties! People should do more of those.
1. Tenstorrent https://github.com/tenstorrent/tt-metal/issues?q=is%3Aissue%... $200 - $3,000 bounties
2. microG https://github.com/microg/GmsCore/issues/2994 $10,000 bounty
3. Li Haoyi https://github.com/orgs/com-lihaoyi/discussions/6 multiple bounties (already mentioned upthread)
4. Algora also hosts bounties for COSS (Commercial OSS) https://algora.io/bounties
Suddenly $5k does not sound as good.
https://tinygrad.org/#worktiny