In fact, contrary things are so very often both true at the same time, in different ways.
Figuring out how to live with the discomfort of non-absolutes, how to live in a world filled with dualisms, is IMO one of the key maturities needed for surviving and thriving in this reality.
Yes. Unwillingness to accept contradictory data points is holding many people back. They have an unconscious need to always pick one or the other, and that puts them at a disadvantage. "I know what I think." But no, you do not.
Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.
If that was the final price, no strings attached and perfect, reliable privacy then I might consider it. Maybe not for the current iteration but for what will be on offer in a year or two.
But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides it wants it. That's unacceptable.
> legally obligated to hand over your data if the US government decides it wants it
Not to mention they could just sell it to the highest bidder, or simply use it to produce competition and put you out of business. Especially if you're using their service to do the development...
Tokens will actually become significantly more expensive in the short term. This is not stemming from some sort of anti-AI sentiment. Two ramps are going to drive this. 1. Increased demand, growing at least linearly but likely already exponentially. 2. Scaling laws demand, well, more scale.
Future, better models will demand both more compute AND more energy. We should not underestimate how slowly energy production grows, or the supply chains required simply to hook things up. Some labs are commissioning their own power plants on site, but this doesn't truly get around grid growth limits: you're drawing on the same supply chain to build your own power plant.
If inference cost is not dramatically reduced, and models don't start meaningfully helping with innovations that make energy production faster and inference/training less power-hungry, the only way to control demand is to raise prices. Current inference prices do not cover training costs. Labs can probably keep covering training on funding alone, but once the demand curve hits power production limits, only one thing can slow demand, and that's raising the cost of use.
> Context switching is very expensive. In order to remain efficient, I found that it was my job as a human to be in control of when I interrupt the agent, not the other way around. Don't let the agent notify you.
I have used a separate user, but lately I have been using rootless podman containers instead for this reason. I know too little about container escapes, though, so I am thinking about combining the two.
Would a podman container run by a separate user provide any benefit over the two by themselves?
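For what it's worth, here is a minimal sketch of layering the two, assuming a dedicated unprivileged user named `agent` and a workspace under its home directory (both names are illustrative, not from any particular setup):

```shell
# One-time setup: create an unprivileged user for the agent.
# Rootless podman needs subuid/subgid ranges assigned to that user.
sudo useradd --create-home agent
sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 agent

# Run the container as that user. Even if the container is escaped,
# the process only lands in the unprivileged "agent" account.
sudo -u agent podman run --rm -it \
  --userns=keep-id \
  --security-opt=no-new-privileges \
  --cap-drop=all \
  --network=none \
  -v /home/agent/workspace:/work:Z \
  docker.io/library/debian:bookworm bash
```

The two layers protect against different failures: the container limits what the process can see and do, and the separate user limits the blast radius if the container boundary is breached. Since the protections are independent, combining them does add defense in depth rather than just redundancy.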
Different worktrees can still lead to merge conflicts later. This may get messy.
Working on tests while something else compiles seems like a more cleanly separable task.