Ceph uses synchronous replication: a write has to be acked by all replicas before the client gets its ack. Fundamentally, Ceph's write latency is at least the network latency between the OSDs. That's the tradeoff Ceph makes for strong consistency.
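To put numbers on that, here's a toy model (my own sketch, not anything from Ceph's code) of why the slowest replica sets the latency floor:

```python
# Toy model: the client's write is acked only after the primary OSD hears
# back from every replica, so the floor is the slowest replica, not the average.
def write_latency_floor_ms(client_to_primary_ms, primary_to_replica_ms,
                           commit_ms):
    slowest_replica = max(rtt + commit_ms for rtt in primary_to_replica_ms)
    return client_to_primary_ms + max(commit_ms, slowest_replica)

# e.g. one replica in the same rack, one in a remote rack: the remote RTT dominates
print(write_latency_floor_ms(0.2, [0.1, 2.0], commit_ms=0.5))  # -> 2.7
```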
I know. We've been running it for nearly a decade now. I mentioned it because a lot of MinIO deployments are pretty small.
I had 2 servers at home running MinIO's built-in site replication, and it was a super easy setup that would take far more hardware and work to replicate with Ceph. So while Ceph might fit the feature list in theory, realistically it isn't an option.
The Ceph way of doing asynchronous replication would be to run separate clusters and ship incremental snapshots between them. I don't know if anyone's programmed the automation for that, but it's definitely doable. For S3 only, radosgw has its own async replication mechanism (multisite).
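A rough sketch of what that automation could look like, assuming both clusters are reachable from one admin box (the pool/image/snapshot names and the "source"/"dest" cluster config names are made up; export-diff/import-diff are the actual rbd subcommands):

```python
import subprocess

def ship_incremental(pool, image, prev_snap, new_snap):
    """Export the delta between two snapshots from the source cluster
    and apply it to the same image on the destination cluster."""
    export = subprocess.Popen(
        ["rbd", "--cluster", "source",
         "export-diff", "--from-snap", prev_snap,
         f"{pool}/{image}@{new_snap}", "-"],
        stdout=subprocess.PIPE)
    # Stream the diff straight into the destination cluster
    subprocess.run(
        ["rbd", "--cluster", "dest", "import-diff", "-", f"{pool}/{image}"],
        stdin=export.stdout, check=True)
    export.stdout.close()
    if export.wait() != 0:
        raise RuntimeError("rbd export-diff failed")
```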
If you set up Ceph correctly (multiple failure domains, replication rules spanning those failure domains, monitors spread across failure domains, OSDs never force-purged), it is actually pretty hard to break. Rook helps a lot too, since it makes it easier to set Ceph up correctly.
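For example, spreading replicas across racks is just a handful of stock ceph commands (the pool and rule names here are hypothetical):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Create a replicated rule that places each copy in a different rack,
# then point a pool at it with 3 copies (min 2 to stay writable).
run("ceph", "osd", "crush", "rule", "create-replicated",
    "rack-replicated", "default", "rack")
run("ceph", "osd", "pool", "set", "mypool", "crush_rule", "rack-replicated")
run("ceph", "osd", "pool", "set", "mypool", "size", "3")
run("ceph", "osd", "pool", "set", "mypool", "min_size", "2")
```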
Regarding AIStore: the recommended prod configuration is Kubernetes, which brings in a huge amount of complexity. Also, one person (Alex Aizman) has about half of the total commits in the project, so the bus factor looks like 1.
I could see running AIStore in single-binary mode for small deployments, but for anything large and production-grade I would not touch it. IMO Ceph is going to be the better option: it is a truly collaborative open source project, developed by multiple companies, with a long track record.
That is an easy way to game the whole system. Create a bunch of accounts and repos, cross-vouch across all of them, generate a bunch of fake AI PRs, and approve them all, because none of the repos are real anyway. Then all you need is a way to connect your web of trust to a wider web of trust, and you have a whole army of vouched sock-puppet accounts.
$200/month is peanuts when you are a business paying your employees $200k/year. I think LLMs make me at least 10% more effective, so the cost is easily worth it to my employer. Lots of trades have much more expensive tools (including cars).
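The back-of-envelope math, using the numbers above (the 10% gain is my estimate, not a measurement):

```python
salary_per_year = 200_000
productivity_gain = 0.10        # assumed effectiveness boost
tool_cost_per_year = 200 * 12   # $2,400

value_of_gain = salary_per_year * productivity_gain   # $20,000/year
print(value_of_gain / tool_cost_per_year)             # ~8.3x return on the tool
```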
I think it depends on the tasks you use it for. Bootstrapping or translating projects between languages is amazing. New feature development? Questionable.
I don’t write frontend stuff, but sometimes need to fix a frontend bug.
Yesterday I fed Claude very surgical instructions on how the bug happens and what I want to happen instead, and it one-shot the fix. I had a solution in about 5 minutes, whereas it would have taken me at least an hour, and most likely more, to get to that point.
Literally an hour or two of my day was saved yesterday. I am salaried at around $250/hour, so in that one interaction AI saved my employer $250-500 in wages.
AI allows me to be a T-shaped developer: I have over a decade of deep experience in infrastructure, but know fuck all about frontend stuff. Having access to AI allows me, as an individual who generally knows how computers work, to fix a simple problem that is not in my domain.
Maybe this is a gray area, but that's kind of my experience with it too. I understand what I want to happen, but don't understand the language and it produces a language specific result that is close enough, maybe even one-shot, for me to use. I categorize this under translation.
My process, which probably wouldn't work with concurrent agents because I'm keeping an eye on it, is basically:
- "Read these files and write some documentation on how they work - put the documentation in the docs folder" (putting relevant files into the context and giving it something to refer to later on)
- "We need to make change X, give me some options on how to do it" (making it plan based on that context)
- "I like option 2 - but we also need to take account of Y - look at these other files and give me some more options" (make sure it hasn't missed anything important)
- "Revised option 4 is great - write a detailed to-do list in the docs/tasks folder" (I choose the actual design, instead of blindly accepting what it proposes)
- I read the to-do list and get it rewritten if there's anything I'm not happy with
- I clear the context window
- "Read the document in the docs folder and then this to-do list in the docs/tasks folder - then start on phase 1"
- I watch what it's doing and stop it if it goes off on one (rare, because the context window should be almost empty)
- Once done, I give the git diffs a quick review - mainly the tests to make sure it's checking the right things
- Then I give it feedback and ask it to fix the bits I'm not happy with
- Finally commit, clear context and repeat until all phases are done
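For illustration, a phase file in docs/tasks might look something like this (entirely hypothetical, just the shape I aim for):

```markdown
# Task: add CSV export to reports

## Phase 1 - backend
- [ ] Add a CsvExporter service with unit tests
- [ ] Wire it into ReportsController behind the existing auth checks

## Phase 2 - UI
- [ ] Export button on the reports index page
- [ ] System test covering the happy path
```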
Most of the time this works really well.
Yesterday I gave it a deep task that touched many aspects of the app. This was a Rails app with a comprehensive test suite, so it had lots of example code to read, plus it could give itself definite end points (they often don't know when to stop). I estimated it would take me 3-4 days to complete the feature by hand. It made a right mess of the UI, but it completed the task in about 6 hours, and I spent another 2 hours tidying it up and making it consistent with the visuals elsewhere (the logic and back-end code were fine).
So either my original estimate is way off, or it has saved me a good amount of time there.
New feature development in web and mobile apps is absolutely 10% more productive with these tools, and anyone who says otherwise is coping. That's a large fraction of software development.
Yes, the research is wrong. And in science, it's not taboo to call that out.
It's outdated, and it doesn't differentiate between people trying to incorporate AI into their current workflow and people who build entirely new workflows around it. It doesn't represent me in any way: I am releasing features to my platform daily now, instead of weekly. So I can wholeheartedly disagree with its conclusion.
The earth is either flat or it isn't, and it's easy to prove it's not flat. It's not easy to conclude that the results of a study in a field that changes daily represent all people working in it, including the ones who did not participate.
If it is so self-evident that the research is wrong, that means there should be some research that supports the opposite conclusion then? Maybe you can link it?
The reason we don't see any other research is that it's nigh impossible to study a moving field. Especially at this pace.
If you have any ideas on how to measure objectively while this landscape changes daily, please share them with us. Maybe a researcher will jump on this bandwagon and prove you right.
I proposed a logically consistent perspective where both my experience and the study are true at the same time? What is your response to that other than comparing me to a flat earther? Do you have something useful to contribute?
Honestly, that is a "skill issue," as the kids say these days. When used properly and with skill, agents can increase your productivity; like any tool, use it wrong and your life will be worse off. The logically consistent view, if you want to believe both this study and my experience, is that the average person is hindered by using AI because they do not have the skills, but there are people out there who gain a net benefit.
It drives me nuts that people take the mean of AI code generation results and use that to make claims about what AI code generation is capable of. It's like using the mean basketball player to argue that people like LeBron and Jordan don't exist.
For sure. I like having discussions with nuanced takes, these are tools with strengths and weaknesses and being a good tool user includes knowing when not to pick it up.
It's a skill issue, which means you can't fire any of your highly skilled employees, which means it has the same value as any other business organization tool like Jira or Microsoft Excel: approximately $10-20 per user per month.
Autodesk Fusion for manufacturing costs less than Claude Max and you literally can’t do your job without it.
So Autodesk takes you from 0 to 100% productivity for under $200 a month and companies are expected to pay $200+ to gain an extra 10-20%?
That isn't how the math works with any other business tool.
Sounds like we need an open source index fund where you can make one payment that goes into a pool of money which is invested into the top 1000 open source projects.
That is a great recipe for systematic discrimination.