Hacker News | cientifico's comments

GitHub shows 2.8k files when filtering for C (including headers...): https://github.com/search?q=repo%3Asystemd%2Fsystemd++langua...

If the project is split into that many different parts that you'd need to understand... that already makes the point.


Well, to be fair, you don't need to understand how SystemD is built to know how to use it. Unit files are pretty easy to wrap your head around; it took me a while to adjust, but I dig it now.
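For what it's worth, a unit file really is just a few declarative lines; a minimal sketch (the service name and binary path here are made up):

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=My example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` starts it and enables it at boot.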

To make an analogy: another part of LFS is building a compiler toolchain. You don't need to understand GCC internals to know how to do that.


> Well to be fair, you don't need to understand how SystemD is built to know how to use it.

The attitude that you don't need to learn what is inside the magic black box is exactly the kind of thing LFS is pushing against. UNIX traditionally was a "worse is better" system, where it's seen as better design to have a simple system whose internals you can understand, even if that simplicity leads to bugs. Simple systems that fit the needs of users can evolve into complex systems that fit the needs of users. But you (arguably) can't start with a complex system that people don't use and get users.

If anyone hasn't read the full Worse Is Better article before, it's your lucky day:

https://www.dreamsongs.com/RiseOfWorseIsBetter.html


LFS is full of packages that fit your description of a black box. It shows you how to compile and configure packages, but I don't remember them diving into the code internals of a single one.

I understand not wanting to shift from something that is wholly explainable to something that isn't, but it's not the end of the world.


No, it's not the end of the world. And I agree, LFS isn't going to be the best resource for learning how a compiler works, or cron, or ntp. But the init process and systemd are so core to Linux; I can certainly see the argument that they should be part of the "from scratch" parts.

You still build it from scratch (meaning you compile from source); they don't dive into Linux code internals either.

They still explain what an init system is for and how to use it.


The problem is ultimately that by choosing one, the other gets left out, so whatever is left out gets one more nail in its coffin. With LFS being the more-or-less official how-to guide for building a Linux system, sysvinit is now essentially "officially" deprecated by Linux. This is what is upsetting people here.

I'm OK with that in the end because my system is a better LFS anyhow. The only part that bothers me is that the change was made with reservations, rather than him saying no and putting his foot down, insisting that sysvinit stay in regardless of Gnome/KDE. But I do understand the desire to get away from having to maintain two separate versions of the book.

Ultimately I just have to part ways with LFS for good, sadly. I'm thankful for these people teaching me how to build a Linux system. It would have been 100x harder trying to do it without them.


Linux is just a kernel; it does not ship with any sort of init system, so I don't see how anything is being deprecated by Linux.

The LFS project is free to make any decisions that they want about what packages they're going to include in their docs. If anyone is truly that upset about this then they should volunteer their time to the project instead of commenting here about what they think the project should do IMO.


The whole point of LFS is to understand how the thing works.

Nothing is actually stopping people from understanding systemd's init except a constant, poorly justified flame war. It's better documented than pretty much everything that came before it.

Looking at Postgres unikernels, it seems some people are trying seriously...

https://nanovms.com/dev/tutorials/running-postgres-as-a-unik...


I don't know how it is in the US, but in Europe the amount of scams is growing. The Twitter blue checkmark was created to distinguish real humans from scammers.

The fine was to protect users from that scam.

I like paying taxes to protect users who don't have the ability to detect scams the way most of us here do (most of the time).

The EU misses the point just as much as Congress in the US when non-tech people believe they can regulate (or are simply lobbied).

But in this case, there would have been no problem if Twitter had decided to use another checkmark for pro accounts.


That it's a macOS-only app.


macOS, iOS, Windows, and Linux


I was going to comment on the Mac exclusivity too which might be a bad idea now that Linux is on the rise. But you're right, there's a Linux beta too now. Thanks for the pointer.


One hidden gem.

The closest free alternative is https://www.mitmproxy.org/, and it's not even close.

And of course there's https://www.wireshark.org/, but that is too generic and has a bigger learning curve.

Worth the money. And no subscription (or there wasn't a subscription back then).


I built a bad clone of Charles Proxy over the summer as part of another project (iOS VPN -> mitm with custom root certificate -> logging). It's surprisingly simple. It basically goes App -> Packet tunnel -> SOCKS -> a child process (I used https://github.com/AdguardTeam/gomitmproxy) to handle the sniffing and reencryption.

I did post the source somewhere at some point, but my git server got corrupted and I haven't gone and fixed it. https://github.com/acheong08/apple-corelocation-experiments/...

I wonder if AI is good enough to vibe code my horrible hacks into a full clone of Charles Proxy these days.

Annoying fact: Apple requires you to have a paid developer account to access the Packet Tunnel APIs. You can't even test it in the Xcode simulator because of how networking works in there. It's insane that I can't even develop for my own phone without paying an extra fee to Apple. The error message when you sideload without a paid account doesn't make it obvious at all, and it took me a good day or two before realizing.


> It's insane that I can't even develop for my own phone without paying an extra fee to Apple.

A Linux phone can’t come fast enough. Yes there is at least one, on ancient hardware. IMO a viable Linux phone requires hardware at most one generation old.


That Linux phone is called Android. It runs plenty fine even without GApps (or with shims like microG), and the sheer amount of engineering needed to make baseline Linux even usable as a phone system is over a dozen years away.

Android with Binder is a strictly superior architecture to anything else that has come before for strict isolation. As a bonus, it's battle tested, and the latest Android phones just... run Linux. You can have a shell and GTK if you so desire.


When you say "just... run Linux", are you referring to Termux, or something else? How do you run a Linux userspace on Android?


I mean a fully fledged regular Debian:

https://www.linuxjournal.com/content/bringing-desktop-linux-...

https://source.android.com/docs/whatsnew/android-16-release#...

While this is mostly a KVM setup, there's nothing specific about Android that prevents a Linux userspace from running in there. Each app is almost one already. Most of its core components have been integrated into Linux's main repository (like Binder), and AOSP isn't that far off from a regular Linux. Sure, zygote, user & power management are not exactly a standard install, but they're not that crazy either.


Okay, so suppose I want a Linux and not an Android phone, so I get an Android phone, disable the login password etc., delete everything except "Linux Terminal", and put my Linux there.

What sort of tradeoffs would I see? Performance? Battery life? Security (secure enclave access)?


That’s all very convincing. For users who just want a Linux phone? Not there yet. Android or not.


Aside from a misplaced obstinacy to have _Linux_ as the base for your phone, with all the awful power management, high energy use, bad governors, terrible process isolation, and security holes everywhere, in a phone that most of the time contains access to your entire life, what does Linux give you that Android doesn't? Both are FOSS.


I do a lot of work in similar areas here.

While vibe coding will get you something that potentially works, I've noticed LLMs are really bad at cleanly abstracting across multiple layers in this area. They usually will insist on parsing and serializing every field at every layer.

If you have the protocols/interfaces well defined up front it is very fast at building extensions, analytics or visualizations though.


> I've noticed LLMs are really bad at cleanly abstracting across multiple layers

Which makes sense, as most developers are too (it's a particular non-trivial skill and rarely modeled well), so LLMs are more likely to be trained on muddled multiple layers.


mitmproxy/mitmweb offer a WireGuard server implementation to do pretty much this. You can grab any existing WireGuard VPN, scan a QR code to import the VPN config, and start monitoring (after installing the MITM certificate, of course).

The packet tunnel story is crazy. I'm glad Android allows you to just use network APIs without question as a developer.


That's what I usually use. The packet tunnel method is used if you want everything to be fully local. My plan was to make an app that can locally spoof your location on iOS without a third party being able to MITM it.


I had excellent experiences with mitmproxy (and mitmdump) in 2016-17. At that point it was powerful and easily scriptable, making it far superior to Charles for my purposes.
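For reference, mitmproxy scripts are plain Python files you pass with `-s`; a minimal sketch that logs every intercepted request (the filename and output format are my own choices):

```python
# log_requests.py -- run as: mitmdump -s log_requests.py
# mitmproxy calls request() once per intercepted HTTP request.
def request(flow):
    # flow.request is mitmproxy's HTTP request object;
    # print the method and the full URL of each request
    print(flow.request.method, flow.request.pretty_url)
```

The same hook can also rewrite headers or bodies on `flow.request` before the request is forwarded, which is where the scripting really beats a GUI proxy.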


Agreed. I used to have a bunch of mitm commands in my bashrc to easily intercept HTTPS messages.


I'd used mitmproxy to reverse engineer browser extensions and mobile apps and it did the trick. It was quite some time ago.


Burp is free too (community edition)

https://portswigger.net/burp/communitydownload


What I really like about mitmproxy is that it runs on my server with a certificate I trusted on my phone.

I then flip on WireGuard on my phone, pointed to mitmproxy, and seamlessly all traffic from my phone is decrypted and viewable through the website on my computer.

Except, of course, all the applications these days that do certificate pinning, which is annoying; but for that we have Frida.


mitmproxy isn't the gold standard; that is Burp Suite, sadly.

Burp Suite uses a subscription model. Charles uses a model like Sublime Text: you buy it and keep that version forever, with major upgrades available at a discount.

I had to chuckle at this one:

> If you purchased a Charles license prior to 1 May 2008 your existing license key is still valid for Charles 5.

So I guess in the past they used a model where you'd have lifetime upgrades.

Which also made me think: I recognize this name! This has to be an older piece of software. Was it published on Freshmeat at the start of this century?

There are also some TUIs for Wireshark, such as frontends for tshark. I think [1] looks interesting, since it can be used with a local LLM (via Ollama).

[1] https://github.com/kspviswa/pktai


mitmproxy supports quite a few features that Charles doesn't and vice versa. You could use them as alternatives for basic browser traffic analysis (where they're both fine), but their features and capabilities cover different areas. Charles is user friendly and robust, mitmproxy has advanced scripting capabilities with a decent amount of community examples available. They complement each other.


- mitmproxy (the Docker version is really easy to set up)

- Burp Proxy

- Wireshark, tshark


Wireshark is extremely powerful and useful, but it lives in a completely different category of tools. It's not a proxy, so it can't modify traffic or inspect HTTPS [1]; it's used to passively capture and analyze general network traffic and troubleshoot networking issues.

[1] Without an elaborate setup: your program needs to be instructed to dump its TLS encryption keys for Wireshark to read.
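As one concrete way to do that, Python's standard `ssl` module (3.8+) can write those keys for you; a sketch, with an arbitrary key-log path:

```python
import os
import ssl
import tempfile

# File that Wireshark will read the TLS session secrets from
# (Preferences > Protocols > TLS > (Pre)-Master-Secret log filename)
keylog_path = os.path.join(tempfile.gettempdir(), "sslkeys.log")

ctx = ssl.create_default_context()
# Every connection made through ctx appends its secrets to this file
ctx.keylog_filename = keylog_path
```

Browsers like Chrome and Firefox do the equivalent when the `SSLKEYLOGFILE` environment variable is set.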


What about ZAP? https://www.zaproxy.org/


I was a daily user of mitmproxy until they changed all the keybindings around version 2. I tried a couple of times to get used to the new "tmux" style, but switched to Charles Proxy.

Has mitmproxy gotten any better in usability over the years?

Just based on the images, it seems to have the same problems?


> Has mitmproxy gotten any better in usability over the years?

The new-ish "Local Capture" and "WireGuard"-mode are quite nice.

And running e.g. `mitmproxy --ignore-hosts '.*' --show-ignored-hosts` [1] for monitoring apps with certificate pinning is also a new feature.

[1] This command turns mitmproxy into a "non-MITM proxy" but still shows the domains (SNI) the app is connecting to.


I generally prefer mitmweb, the web frontend for mitmproxy. I don't have much of a problem with their tmux-like UI, but I find mitmweb a lot easier to use than the keyboard shortcut based terminal navigation.


Same experience. V1 and V2 were simple to use: clear, start capture, navigate, etc. Everything felt broken after the switch. Was that the trade-off to get more features?

Maybe I should do a fork and try to fix it again.


Fiddler is superior to Charles and always has been.

https://www.telerik.com/fiddler


Did you just call Charles Proxy a "hidden" gem? :)


Services (or sets of microservices) should mimic teams at the company. If we have a polytree, it should represent departments.


Microservices should have clear owners reflected in the org chart, but the topology of dependencies should definitely not be isomorphic to your org chart.


That was the same conclusion I reached by playing with the graphs.

I concluded that better IO planning is only worth it for "slow" I/O in 18.

Pretty sure it will bring a lot of learnings. Postgres devs are pretty awesome.


I wouldn't generalize it that much. There are a few patterns where Turbo Streams, subscriptions, and permanent frames still make a lot of sense.

One classic case is user notifications - like the user icon in the corner. That's perfect as a permanent lazy frame, with a subscription watching for any user-related updates. This way you don't have to think about updating that widget across different pages, and you can centralize all user-related async events in one controller.

Another pattern is real-time dashboards. You never know which part of the dashboard will change, and it's actually simpler on the backend: you just track what was updated and push that specific part. Clean and efficient.
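The lazy-frame side of this is only a couple of tags; a sketch, with a hypothetical id and path:

```html
<!-- Rendered in the layout; Turbo fetches /notifications when the frame
     becomes visible, on every page, without page-specific wiring -->
<turbo-frame id="notifications" src="/notifications" loading="lazy">
  Loading...
</turbo-frame>
```

The server's response to `/notifications` just needs to contain a matching `<turbo-frame id="notifications">` with the widget's content.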


Yeah, these are examples of situations I was referring to, where it makes sense.


Been using Turbo (and Turbolinks before it) for 10+ years, mostly outside Rails. It's awesome in that context.

Can't really see how making it more Rails-centric would help - more likely it'd just cause a fork for everyone using Hotwire without Rails/Ruby.


Killing all the fun.

Remember when you could trick a colleague into posting on Twitter, Facebook... just by sending them a link?

CSRF fixes are great for security, but they've definitely made some of the internet's harmless mischief more boring.

