> you need to "edit your makefile". That isn't going to work for distributions

Is it not? [st] requires exactly that. And it works for distros, from what I can tell - debian/ubuntu, arch, almost everybody seems to ship it just fine.

[st] https://st.suckless.org/


OK, maybe "not work well for distributions"?

Looking at Debian, they are distributing a series of patches to the Makefile. Getting away from autotools by having every distro patch your makefile might still be better than autotools, but I'd hope we might be able to do even better?


> Debian (...) are distributing a series of patches to the Makefile.

Where do you see that, sorry? I'm looking at the "Download Source Package" section here:

https://packages.debian.org/sid/stterm

...and the only patch on there is debian/patches/0001-fix-buffer-overflow-when-handling-long-composed-inpu.patch, which doesn't touch the Makefile.


I wasn’t sure what the package was called, so I searched for ‘suckless’, and found this, with two Makefile patches.

https://sources.debian.org/patches/suckless-tools/46-1/


Hanlon's razor is useful to curb one's paranoia, but it is far from being a universal rule.

In fact, malice and incompetence are not necessarily mutually exclusive.

This very incident shows several instances where "Jia Tan" is arguably incompetent, in addition to being clearly malicious: unintended breakage from adding an extra space between "return" and "is_arch_extension_supported"; several redundant checks for `uname` == "Linux"; a botched payload, so the "test files" had to be replaced, with a pretty fishy explanation; rather inefficient/slow GOT parsing; the list goes on...


Wait, a lens can affect colours?


Yes: the quality of the glass, the number of lens elements, the coating on the front element, internal reflections, aberrations and so on. It won't affect colors in the sense of red turning to blue; it's more that red goes to a slightly different, muted shade of red.


To expand on this a little: there is no lens that changes the frequency of light. What happens is that, due to chromatic aberration, structures in a given colour can be less precisely focused than others, and in that blur a different mix of frequencies (a different shade of colour) can appear.


> What (...) happened to easily sending data, over the Internet

And for mobile phones without internet it's similar: unnecessarily hard. The other day I was hiking with friends and wanted to share a .gpx file with the route, at a spot with no cell coverage. I thought: "I 'member, bluetooth can send files". Well, we spent a good 15 minutes trying and failed miserably; that's no longer possible, in the name of "security". So I had to wait for the cell signal to come back and send the file via whatsapp. To someone standing right in front of me.


There are many ways of sending files between phones, none of them good.

Bluetooth can work, but it is slow as hell, and Apple doesn't support it.

Cloud services are convenient, but not only do you need some signal, you also use up your data plan.

If you have USB OTG support, you can simply use a thumb drive, like with your PC, but it is cumbersome and you need the hardware.

There are some somewhat proprietary systems like QuickShare and AirDrop, which are supposedly great when you have support, which is not always the case.

Other options include having one phone act as a WiFi AP and host a local HTTP server; there are apps for that (e.g. MiXplorer). A bit uncommon, but the advantage is that only one phone needs to do the weird stuff; for the other, it is just downloading from a URL.
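(For the curious, the serving half of that approach is almost trivially small; a rough Python sketch, e.g. run from something like Termux on the hosting phone, with the port being an arbitrary choice:)

  # Serve the current directory over HTTP; the other phone just opens
  # http://<hotspot-ip>:8000/ in its browser and downloads the file.
  import http.server
  import socketserver

  PORT = 8000  # arbitrary

  handler = http.server.SimpleHTTPRequestHandler
  with socketserver.TCPServer(("", PORT), handler) as httpd:
      httpd.serve_forever()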

There are also apps like SyncThing based on P2P networks.

Generally, phones are pretty terrible at dealing with files. Their OS is designed around apps controlling their data rather than around interchangeable files like traditional desktop OSes. The way they want you to work is not by exchanging .gpx files but instead by using some built-in "share" feature of your hiking app. It may be .gpx under the hood, but they don't want the end user to see a file.


I agree and get your point.

But localsend has worked well for me. Yes, it requires an app, but it would be nice if we could get vendors to bundle that rather than a boatload of bloatware.

I know that it would be too optimistic to hope for that from Google.

See https://localsend.org/

Spread the word.


Localsend looks neat, but they're absolutely pissing into the wind by saying that it is "Open-source Airdrop" in the title in the Play store.

That's a clearcut case of absolutely willful trademark infringement, and it will not go well for the authors when (not if) Apple happens to cast their gaze in that direction.


>Bluetooth can work, but it is slow as hell, and Apple doesn't support it.

Wait, seriously? iPhones still don't support Bluetooth file transfers? I knew this was the case with the first iPhone, but I just accepted that as a limitation of an OS that was still an early release of a brand-new product with limited functionality. I wouldn't have expected this to still be the case after 15 years.

Anyway, I'm still baffled we never got a standardized Wi-Fi file transfer protocol, like with Bluetooth but at wi-fi speeds.

Oh right, never mind: Apple wouldn't have supported it anyway, versus their own proprietary one that only works on Apple devices.


Airdrop? I'm pretty sure it works without wifi or cellular.

(If you're talking iphone to Android, nvm)


Yeah, I meant cross-platform file sharing, since that's what Bluetooth enabled originally: you could send files between whatever brand of phone and whatever brand of PC.


You first have to get a file from your iOS app. Good luck with that.


Hit the share button and select "Save to Files". Dead easy.


I use https://github.com/marcosdiez/shareviahttp all the time, to share with other phones or with my ereader or PC.


I recently had to have a non-technical person send me a very large file, and encountered this. There really is no good universal way to do this, even here in 2024! File is too big for E-mail, ftp is too big a technical hurdle for the guy. Dropbox requires accounts and sharing and all sorts of access shit for him to figure out. I ended up enabling WebDAV on an existing web server I have admin access to, and luckily he's on MacOS which makes it relatively straightforward to write a file to WebDAV. If he were on Windows I have no idea what I'd ask him to do, since I tried for 20 minutes to figure out how to actually connect to a WebDAV folder in read-write mode and Microsoft thwarted me at every turn.
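(For what it's worth, "writing a file to WebDAV" is just an HTTP PUT underneath, so a few lines of Python could also have done it; a rough sketch, with the URL and credentials being placeholders:)

  # Upload a large file to a WebDAV share via a single HTTP PUT.
  import requests

  with open("big_file.zip", "rb") as f:
      resp = requests.put(
          "https://example.com/dav/big_file.zip",   # placeholder WebDAV URL
          data=f,                                   # streamed from disk
          auth=("username", "password"),            # placeholder credentials
      )
  resp.raise_for_status()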

It's pretty shocking that we don't have a dead-simple cross-platform "send a file to someone over the Internet" solution that doesn't involve cloud servers and accounts and downloading apps.


>File is too big for E-mail,

Back in the old days, we would have used PKZIP or RAR to split the file into smaller pieces.

>ftp is too big a technical hurdle for the guy

Back in the old days, even a secretary could figure out how to use the MS-DOS command line well enough to get her job done. I'm not sure what happened between then and now, but these days people just say "I'm non-technical" and refuse to learn anything that doesn't involve a GUI. FTP isn't hard, it's just a few commands, and it's much easier to tell someone how to send FTP than to use any GUI, since you can type out the exact commands to use:

  1. ftp [ip address]
  2. type username
  3. type password
  4. cd dir-to-drop-files
  5. put filename
  6. quit

These days, we'd use sftp anyway, but you could also use scp in a single command line which they could simply copy-and-paste from an email, after you have their temporary account set up.


> Back in the old days, we would have used PKZIP or RAR to split the file into smaller pieces.

Back then there were far fewer non-technical people using email anyway. In my experience, people like office admins wouldn't do anything with split archives; they would get the local computer person to do it for them (and the same would happen at the receiving end). They were generally aware of zip files, but might get a local tech to compress things for them even if a split archive wasn't needed.

Or they would just send a floppy in the post if the file would fit on there but the email limit at one end or both was less than 1440K. On one occasion, because no one was free to help otherwise, I remember a secretary driving a floppy between campuses to deliver updated slides for a talk because she couldn't get it pushed through email (I forget if the biggest blocker was at the sending or receiving end).

Things changed quite a lot when decent GUI zip tools became common, and again when shell-integrated ones arrived in the Win95 era.

Flipping back through time to now, there are two problems that we didn't have back then:

1. Phones don't have particularly good+friendly archivers, and even if one is found, security protections might mean said archiver can't access the data it needs from the app that generated it. As well as talking someone through the archiving process, you might need to try to work out how to open the relevant permissions and describe that to the remote non-technical user too.

2. Often desktop environments are locked down too, and Windows for instance doesn't come out of the box with an archiver, neither CLI nor GUI, that supports multi-volume archives or encryption worth bothering with.

> Back in the old days, even a secretary could figure out how to use the MS-DOS command line well enough to get her job done.

Some would do that, but in my experience they weren't the majority, by quite a margin. And not just secretaries: higher paid managers & such too. In fact them more so (as they would drop the task on a secretary, who would then enlist the office kid who knew some tech).

> FTP isn't hard, it's just a few commands, and it's much easier to tell someone how to send FTP than to use any GUI, since you can type out the exact commands to use:

The issue with that was (still is) that they will need telling each time. It isn't their job to do these technical things, so if they don't have an active interest they aren't going to spend time learning it so that they can work out the exact sequence next time, unless it happens often enough that they learn by repetition. It isn't hard, but it isn't natural or familiar to many. It is only natural to you and me because we do that sort of thing regularly (or have done it enough in the past that it has become wired in).


>And not just secretaries: higher paid managers & such too.

No, the managers were lazy and insisted on having secretaries print everything out for them, and refused to touch a keyboard.

>In fact them more so (as they would drop the task on a secretary, who would then enlist the office kid who knew some tech).

In my experience in a college, the secretaries could handle their computer-based duties by themselves just fine. They only called for help when they ran into a problem they hadn't encountered yet. But doing basic file management at the MS-DOS command line (this was before Windows) was within their ability.

>The issue with that was (still is) that they will need telling each time.

Yeah, so what? ryandrake above said "I recently had to have a non-technical person send me a very large file, and encountered this", which is why I countered his claim that "ftp is too big a technical hurdle for the guy". It doesn't matter if it isn't natural or familiar; all you have to do is cut-and-paste some simple commands. If you can't do that, you're hopelessly stupid, I think. You don't even have to understand the commands! That's my whole point with the utility of a command-line interface in situations like this: you can give someone some explicit commands, and they should be able to follow them exactly, simply by cutting-and-pasting or typing them in. It's not like a GUI, where you have to try to tell them (perhaps over the phone) exactly where to click, which isn't reliable since you may not be able to exactly duplicate their environment or they might be using a different program or even OS. If you're somehow smart enough to use a computer with a GUI for years (and likely multiple computers: smartphone + PC), but you're too stupid to open a terminal window and cut-and-paste a one-line command, this sounds to me like "willful ignorance".

Anyway, the point is that this is fine for a rare occurrence like this. It doesn't sound like ryandrake has to deal with this problem with this same person on a daily basis.


Phones are able to capture MiBs per frame via their cameras, so I wonder if you could make an app that has both playback and recording of data, like displaying an animated QR-like code on one phone and recording it on another.
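(The sending half is simple enough to sketch with the qrcode Python package; the chunk size and framing format below are just guesses, and decoding a recorded video stream on the receiving side is the genuinely hard part:)

  # Split a file into chunks and render one QR code per "frame".
  # A real app would add checksums, error recovery and a sensible frame rate.
  import base64
  import qrcode

  CHUNK = 1024  # bytes per frame; actual capacity depends on QR version/error level

  with open("route.gpx", "rb") as f:
      data = f.read()

  chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
  for n, chunk in enumerate(chunks):
      payload = f"{n}/{len(chunks)}:" + base64.b64encode(chunk).decode()
      qrcode.make(payload).save(f"frame_{n:04d}.png")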


To me, this seems like a feature the app you're using with the GPX should add to its social settings, with a Share With Friend(s) button. Specifically, knowing it may be used in environments where cell coverage might not be available, it could form an ad-hoc wifi link and push the file across. Of course, they will want you to allow access to Contacts to know who your friends are. But you've probably already been asked for all of the permissions needed for this feature, because $REASONS


On Android it's built in, as Share -> Quick Share/Nearby Share.

Available since Android 6 (c.2015)


Have you tried it? It's horribly slow.


I've tried it.

I've had it be horribly slow, wickedly fast, and also absolutely fail to work at all.

The wickedly fast method seems to involve the two devices forming an ad-hoc WiFi network -- just for themselves. 802.11ac/ax is pretty darned fast when there's no competition, on a network with exactly two nodes, and when those two nodes are separated by only a couple of feet.

IIRC, this is not the default mode of operation.


I use it all the time. I guess I'm just patient :-)


Huh, I wonder what the problem was. I semi-regularly send files between my Windows laptop and my Android phone. I do recall never being able to get that functionality working properly with a Mac though.


That was between two Android phones, the receiving phone being an "oppo" or some such, with stock firmware. The receiving phone would see the incoming file request, ask the user to accept, then error out with "unknown file", with no way to actually save it. I've sent files via bluetooth from lineageos to lineageos, no problem.


It's also possible it just dropped it silently in the Downloads directory.

Otherwise it's ironic that all of this behaves like the 90s PalmOS, which could not store a file unless a program would accept it.


I could totally see it being something stupid in how Android manages file associations or something. The default file managers tend to be fairly crippled and only actually show a subset of files present.


Perfect irony would be if the receiving Android attempts to check the incoming file against some kind of Internet service.


Pretty easy with iPhone+AirDrop. Just bring your iPhones together - it asks if you want to airdrop something, select file, done.


Briar can send files over Bluetooth.


Sometimes, I work with bi-directional amplifiers and distributed antenna systems that are intended to improve cellular coverage inside of a building where there may be little or none of that.

I have a fairly expensive meter at my disposal to use for planning things like this, which analyzes different cellular carriers by frequency, and can output (messy, and with unescaped commas for notes, but eventually-fucking-usable) CSV files of the results -- with GPS coordinates of the measurement location.

This sounds amazing for a person like me in this line of work. But it is not amazing for a person like me in this line of work.

(As a preface for the rest of this, remember: This meter is a tool that is meant to be used in areas of limited or zero cellular coverage -- places where outside RF is problematic for whatever reason.)

1. The meter has a Bluetooth interface that connects to an app on a pocket computer. (This part works fine, usually, except the app often doesn't background properly and silently dies if the user uses their pocket computer to do some other task, which might be fine if the problem was ever reported. [Haha!])

2. The meter expects the pocket computer to have an Internet connection, so it can use that to upload its findings to The Clown. (This part often cannot work, because the whole fucking reason any of this is happening is because cellular coverage is shit inside of a random building.)

3. The meter expects that the pocket computer will provide GPS coordinates, even though it is intended to be able to be used indoors -- without network connectivity, or perhaps even in a Faraday cage. And while modern pocket computers are very good at providing some location data by various means as long as there is internet connectivity or GPS-esque data, all of them fail at this when there is neither Internet nor GPS available. It produces an error [Haha!] when there is no location information available.

4. It does not provide useful errors. It provides errors, but they aren't specific at all and do not promote productive troubleshooting or workflow. ("Oh, there was a problem with your measurement! [Haha!]" is the singular error.)

5. Sometimes, it will even produce an error [Haha!] but record the measurement anyway -- and without recording the error.

6. It stores nothing locally. When an error happens [Haha! Good luck!], it is impossible to quickly see if anything was stored at all, so the only clear path is to repeat measurements that result in an error [Haha!]. This often results in redundant measurements being actually-recorded, but who would know that at the time of measurement. (These measurements often take about 4 minutes each, so these errors [Haha!] and repeated measurements can consume significant portions of an expensive workday.)

7. (Your main point): Exporting a CSV file of [whatever-the-hell was collected] is possible, as long as I want to send it to Google Drive or some other Clown-based service. The CSV is only a few tiny kilobytes at very most, but it won't let me copy the CSV to my pocket-computer's clipboard, or send it in an email, or save it locally on the pocket computer, or send it with Bluetooth to my laptop. It has to be exported to a Clown-based service, and then it can be read from that Clown-based service by some other device. There are no other options presented, unlike in so many other apps in my pocket computer.

8. Continued: While the maker of this meter device has their own Clown, and this Clown is clearly extant on the Internet, this Clown is completely inaccessible outside of their pocket-computer app. I cannot bypass Step 7 by any official means, no matter how deep my desktop computing prowess may be.

It is completely shit, and it appears to be the best thing available on the market in this space. (And it isn't even Chinese shit: The company that produces this meter is in Utah.)


An alternative would be to have the model itself aware of sensitive topics and meaningfully engage with questions touching such topics with that awareness.

It might be a while till that is feasible, though. Until then, "content safeguards" will continue to feel like overreaching, artificial stonewalls scattered across otherwise kinda-consistent space.


> [Alpine] just do not link SSH against libsystemd

Arch doesn't either.

In fact, official releases of openssh-portable don't. One has to patch it for that. Debian and Fedora (as well as their downstreams) do apply such a patch [1]. Most other distros don't.

[1] https://sources.debian.org/src/openssh/1%3A9.7p1-2/debian/pa...


> 100s of dependencies. If there is a 1% chance of a random repo having a backdoor, the project will be compromised

Apologies for nit-picking, but that's not quite how the probabilities combine. The total probability across 200 independent tries of 1% chance each is ~87%:

  p=0
  for _ in range(200):
    p=p+(1-p)*.01
  print(p)

  0.8660203251420382
Your "sooner or later, to the point where we can assume" conclusion, still stands, of course.


Eh. I suppose 3 points, all minor:

1) Best of luck in an audit explaining that there is almost a 14% chance that your project is free of backdoors given reasonable assumptions. I recommend taking a photo of the auditor's expression and reporting back.

2) There are quibbles to be had about the IID assumption here; dependencies aren't selected randomly and attackers aren't targeting them randomly.

3) You don't need a for loop for that, you can calculate directly with `1-(0.99*200)`.


pow, not multiply
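That is, the closed form is 1 - (1 - 0.01)^200:

  # closed form; same ~0.866 as the loop above
  print(round(1 - 0.99**200, 4))  # 0.866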


> transistors in an orderly matrix (...) forming scattered circuits connected by thin metal wires.

That's [ULA], isn't it? This tech was also known as "Gate Array", before FPGA came along.

[ULA] https://en.wikipedia.org/wiki/Uncommitted_logic_array


Yes, a ULA is another name for a gate array. A while ago I bought an 8086 chip on eBay that turned out to be a random ULA chip that was re-labeled: https://www.righto.com/2020/08/inside-counterfeit-8086-proce...


From that article:

  The book The ZX Spectrum ULA: How to design a microcomputer discusses [...]
 
I had no idea such a book even existed. Now I am really curious about what other hidden gems exist on your bookshelf.


I was intrigued, thinking that someone had managed to clone an 8086 with a gate array. What a disappointment.


I've read that some fake sound chips from eBay actually used a gate array. Not original parts, not a 100% equivalent drop-in, but close enough to work & convince most buyers. Don't know if true but sounds plausible. And if so, a (relatively) simple cpu wouldn't be much of a stretch.

Ken's article doesn't mention it, but there's another reason to use a gate array: pre-fabrication, and through that, easier stock-keeping.

Say you have ~100 different ICs like the one described. One could then do all the manufacturing steps to produce e.g. 100k of those gate arrays, except the last fabrication step, and possibly keep a large stock of those 'blank' gate arrays.

Then (when it's known which of those 100 ICs is needed), apply only the last production step to those 'blanks', and presto: the selected IC is ready - in large volume if needed. Or produce ICs where the original part has become hard to find.

For a mil-spec part, that flexibility might be among reasons to go for a gate array.

These days, something like a fuse-based FPGA might be used instead?


> Ken's article doesn't mention it

See footnote #5 :-)


Ah, righto! I also see you elaborated more on this aspect in the article on that fake 8086. For anyone interested:

https://www.righto.com/2020/08/inside-counterfeit-8086-proce...

As always: great stuff!


Ken's footnotes are usually my favorite part of his articles!


Yes it's a primitive form of gate array.

The thing is, gate arrays are sort of something between a full custom part and an FPGA: they cost less per chip than an FPGA, but more per chip than full custom. On the other hand, NRE (the up-front one-time cost) is a lot lower than for full custom (I've built both).

These parts are mil-spec, which means their volumes will be lower; it may have made sense to build a mil-spec pad ring and then spin out a range of low-volume mil-spec 7400 parts from it.


I have heard of this structure also referred to as a PLA (programmable logic array), and I think a ULA is specifically a PLA that is embedded in the empty space of a bigger design (so that if something is broken, a debug fix is one mask change, not a full respin).

I am not sure that Wikipedia is out-of-date enough to have the precise terminology, but I also may be wrong.


The terminology is a bit of a mess, but usually a PLA is highly structured with an AND plane and an OR plane, so it implements sum-of-products logic. A gate array is more general, with arbitrary connections.
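(For anyone unfamiliar with the term: "sum-of-products" just means an OR of AND terms, and any truth table can be written that way, which is why the fixed AND-plane/OR-plane structure is enough. A toy illustration in Python, with a made-up three-input function:)

  # PLA-style sum-of-products: each product term ANDs some (possibly
  # inverted) inputs, and the OR plane combines the product terms.
  from itertools import product

  def f_sop(a, b, c):
      # f = (a AND b) OR ((NOT a) AND c)  -- two product terms, one OR
      return (a and b) or ((not a) and c)

  for a, b, c in product([0, 1], repeat=3):
      print(a, b, c, "->", int(f_sop(a, b, c)))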


Excellent article, and sections "NAT Problems" and "NAT Solutions" are a good starter on that topic.

Except even the third-choice solution is not always feasible. Reserving a fixed RTP/UDP port range is not possible with carrier-grade NAT, which is quite common with residential ISPs and nearly universal with cell ISPs.

A fourth choice would be to reserve the port range on a personal server (which would run a B2BUA, asterisk in OP's case, or an RTP proxy), and force calls, including media, from/to the SIP handsets to go via that.


All of the NAT problems would instantly go away with IPv6, but with adoption still at a meager 50% I suppose you'll need a PBX of some kind to receive at least half the calls.

For those stuck behind CGNAT, there are guides online for how to set up a VPN to a cheap VPS and forward all network traffic to your network so you can have almost-real connectivity at home. If you're content with 50mbps, you can even use Oracle's Always Free tier.


One often sees the STUN, TURN or ICE protocols around SIP-based VoIP; I believe they are supposed to help solve those issues?


Yes, Asterisk can poke holes in NAT on its own just fine. I was surprised how pessimistic the article is on this. I have systems running for months and years behind NAT with no issue. You might have to disable direct media (endpoint/disable_direct_media_on_nat).

Also (this is just an uptime-related tip, not NAT): you must explicitly set registration/max_retries to a huge number, otherwise Asterisk just gives up permanently at some point. It’s a really weird default.


Are you doing calls to/from other sip URIs that are also behind NAT, or just using your trunk and internal extensions?


Trunk and internal, and I usually put all the phones in their own VLAN w/o direct Internet access. I don’t really see a use for dialing arbitrary SIP URIs. If I need to add a remote phone I’ll just connect it directly with a network tunnel.


They don't always work...

The idea is that if you send UDP packets, as outgoing traffic, to a destination arranged by a middleman (STUN) or to a proxy arranged by a middleman (TURN), your Wi-Fi router should be smart enough to set up a temporary NAT entry to allow responses to reach your $LOCAL_IP:$PORT. In reality, the Wi-Fi router may have a short memory, or may be dying behind a refrigerator covered in dust and unable to handle all the necessary combinations and ranges of addresses and ports, resulting in various partial failures such as one-way audio or a missing participant in a group call.
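(A stripped-down illustration of that "temporary NAT entry" idea in Python; the rendezvous address is a placeholder, and a real STUN exchange has its own message format:)

  # The outbound packet is what creates the NAT mapping for LOCAL_IP:PORT;
  # replies can then come back through it - until the router forgets it.
  import socket

  SERVER = ("stun.example.net", 3478)        # placeholder middleman/rendezvous

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(("0.0.0.0", 50000))              # the $LOCAL_IP:$PORT from above
  sock.settimeout(5.0)
  sock.sendto(b"hello", SERVER)              # outgoing traffic opens the entry

  try:
      print(sock.recvfrom(2048))             # works only while the entry lives
  except socket.timeout:
      print("no reply - mapping expired, or blocked")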

A fifth-choice option is to just encapsulate everything into a VPN, preferably an L2 VPN over HTTPS to a server on a global IP. If that isn't working, there must be no Internet at all.


Why would that be more reliable than TURN? If your router "forgets" about established streams half-way, your VPN will not stay connected either.


It makes it boolean: it's connected, or it's not. "One of the RTP media transports to one of the destinations is failing to establish DTLS ciphering, and I think it has to do with either an RTC issue or a Chrome bug" is self-inflicted pain.


UDP is an unreliable transport by specification, so I guess that if a piece of network equipment such as a router cannot cope with the general workload, it would probably sacrifice UDP first without a second thought.


This is not how congestion control works on the internet.

Indeed TCP depends on packets getting dropped as the feedback mechanism for knowing when to slow down.

It's important that packets are dropped fairly, as otherwise on a loaded network only the preferred protocol(s) would keep working and the others would get starved. You don't want DNS to stop working when an HTTP flow is running at capacity on your link, for example.


If you don't have any evidence, guessing that routers/modems prioritize IP packets based on the next protocol sounds like a conspiracy theory.


Huh? It's an obvious thing to do. If you have to drop a packet because your queues are full, any engineer with an IQ over 50 will pick the victim from the UDP packets, because the sender expects it might happen, and also because it won't necessarily cause a retransmission - e.g. an RTP packet.


Why is that the obvious choice? TCP can recover through retransmission, UDP cannot. It sounds just as logical to prioritize UDP and let TCP connections slow down, rather than letting UDP connections suffer data loss.


As I said, application programmers expect and accept that their UDP packets might be lost or duplicated. This is sort of part of the contract. Even datagram integrity is in theory not guaranteed, as the checksum field of UDP is optional.

Sometimes people don't see the point of UDP at first, because you eventually have to implement sequence numbers, CRCs, time-outs, retries, etc. that are similar to what TCP does. One can find the reasons why one wants to do this anyway in [1]. In a nutshell, reliability is often ensured by the application layer anyway, so you don't need the transport protocol to do extra stuff that you have no control over and that might even get in the way (see the numerous esoteric ioctl and sysctl settings under Linux).

It is an obvious choice because, as I said, a router dropping a packet does not necessarily trigger a resend, e.g. for RTP or syslog (over UDP). With TCP, a resend is guaranteed. If you are overloaded, you'd rather take the action you might get away with than one that probably just buys time.
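(To make the "sequence numbers, CRCs" point concrete, the application-layer framing can be as small as this; a sketch, not a real protocol:)

  # Prepend a sequence number and CRC32 to each datagram so the receiver
  # can detect loss, reordering and corruption on its own.
  import struct
  import zlib

  def frame(seq: int, payload: bytes) -> bytes:
      return struct.pack("!II", seq, zlib.crc32(payload)) + payload

  def unframe(datagram: bytes):
      seq, crc = struct.unpack("!II", datagram[:8])
      payload = datagram[8:]
      if zlib.crc32(payload) != crc:
          raise ValueError("corrupted datagram")
      return seq, payload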

[1] https://web.mit.edu/Saltzer/www/publications/endtoend/endtoe...


That some be lost is expected; that all of them be blocked is not.


@dang: sdf.org is a multi-tenant domain. It would be nice if HN's site link would treat it as such, i.e.:

- /from?site=thomask.sdf.org

- not /from?site=sdf.org


send the mods an email (link is in the page footer) instead of hoping they randomly see your comment

