Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
I also don't believe it can be one-shotted (there are too many deltas between Notion's API and Obsidian).
With that said, LLMs are great for enumerating edge-cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the Obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same order of magnitude as the bounty ($100-$1k in API costs + dev time), but the additional context (tests, docs, etc.) will be invaluable for future changes to either API surface.
Someone's given it a shot: https://github.com/obsidianmd/obsidian-importer/pull/424
Can't help but think that if the author of that PR had been less defeatist and snarky, they would have had a chance at a decent discussion about it being a viable option (with AI).
In addition to what's already in the thread, I assume by now somebody has vibecoded an agent to scan GitHub for bounties and then automatically vibe up a corresponding solution. Will be a fun source of spam for anyone who wants to do the right thing and pay people for good work.
I recently got my first AI generated PR for a project I maintain and it was honestly a little stressful.
My first clue was that the PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce such a detailed description would likely also have broken it down into separate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component had been replaced with a comment that stated "Instantiating component now requires X"
Except the new instantiation wasn't anywhere. Their coding agent had commented out instantiating the component instead of figuring out dependency injection.
That component was the container for all of the app's settings.
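Roughly what that looked like, reconstructed from memory (TypeScript sketch, all names invented for illustration):

    class SettingsService {}

    class SettingsContainer {
      // The refactor added a required constructor dependency.
      constructor(readonly service: SettingsService) {}
    }

    // What the agent left behind: the old call site deleted and
    // replaced with a note, with no new instantiation anywhere.
    // "Instantiating SettingsContainer now requires SettingsService"
    // const container = new SettingsContainer();

    // What the change actually required: construct the dependency
    // and pass it through.
    const container = new SettingsContainer(new SettingsService());
    console.log(container.service instanceof SettingsService); // true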
-
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it couldn't understand architecture well enough. I suspect whoever was piloting it probably tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
Not only admitting, it should be law to mark anything AI generated as AI generated. Even if AI contributed just a tiny bit. I don't want to use AI slop, and I should be allowed to make informed decisions based on that preference.
Did you by any chance type this comment on a device that has autocorrect enabled?
Autocorrect is not generative AI in the way that anyone is using that word. Also autocorrect doesn't even need to use any sort of ML model.
Hurr durr Autocorrect is machine learning and you didn't mark your comment as AI generated hurr durr, get lost
Having once used the Notion API to build an OpenAPI doc generator, I pity whoever takes this on. The API was painful to integrate with, full of limitations, and nowhere near feature parity with the Notion UI itself.
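For a taste of why: even reading a single page's content is chatty. A rough sketch with the official @notionhq/client (error handling and the ~3 requests/second rate limit omitted; an illustration, not production code):

    import { Client } from "@notionhq/client";

    const notion = new Client({ auth: process.env.NOTION_TOKEN });

    // Two sharp edges show up immediately: results are paginated
    // (max 100 per call), and nested content isn't inlined -- any
    // block with has_children needs its own recursive round trip.
    async function fetchBlocks(blockId: string): Promise<any[]> {
      const blocks: any[] = [];
      let cursor: string | undefined;
      do {
        const res = await notion.blocks.children.list({
          block_id: blockId,
          start_cursor: cursor,
          page_size: 100,
        });
        for (const block of res.results as any[]) {
          if (block.has_children) {
            // One extra request (or more) per nested block.
            block.children = await fetchBlocks(block.id);
          }
          blocks.push(block);
        }
        cursor = res.has_more ? (res.next_cursor ?? undefined) : undefined;
      } while (cursor);
      return blocks;
    }

Every nested toggle, column, or synced block costs another round trip, which is a big part of why exporting a large workspace through the API is so slow.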
> Please only apply if you have taken time to explore the Importer codebase, as well as the Notion API.
Suddenly $5k does not sound as good
Unless you've already done projects in both. Then, it might seem trivial? Idk. I haven't looked at either. But if there is such a person out there, with the spare time to look into it, they might be ideally suited!
Why? It doesn't say you need to have extensive experience with them. I would assume this is mostly to dissuade applicants that are not aware of the potential challenges ahead.
This "exploring" can take tremendous amounts of time, depending on the complexity of these APIs. My time is worth a lot to myself. I am not going to spend many hours for a chance of winning 5k$. If this takes a week off of my free time its not worth 5k to me.
People who have restructured medium-sized software platforms will know why the time cost is 3 to 7 times higher than rewriting from scratch.
Cleaning up vibe-code... lol :3
https://www.youtube.com/watch?v=aCbfMkh940Q