> Micro-combs – optical frequency combs generated by integrated micro-cavity resonators – offer the full potential of their bulk counterparts, but in an integrated footprint. They have enabled breakthroughs in many fields including spectroscopy, microwave photonics, frequency synthesis, optical ranging, quantum sources, metrology and ultrahigh-capacity data transmission. Here, by using a powerful class of micro-comb called soliton crystals, we achieve ultra-high data transmission over 75 km of standard optical fibre using a single integrated chip source. We demonstrate a line rate of 44.2 Tbit/s using the telecommunications C-band at 1550 nm, with a spectral efficiency of 10.4 bit/s/Hz. Soliton crystals exhibit robust and stable generation and operation as well as a high intrinsic efficiency that, together with an extremely low soliton micro-comb spacing of 48.9 GHz, enables the use of a very high-order coherent data modulation format (64-QAM, quadrature amplitude modulation). This work demonstrates the capability of optical micro-combs to perform in demanding and practical optical communications networks.
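A quick back-of-the-envelope check on the headline numbers (my own arithmetic, not from the paper):

```python
line_rate = 44.2e12   # claimed line rate, bit/s
spec_eff = 10.4       # claimed spectral efficiency, bit/s/Hz
spacing = 48.9e9      # comb line spacing, Hz

bandwidth = line_rate / spec_eff   # occupied optical bandwidth, Hz
channels = bandwidth / spacing     # implied number of comb lines

print(bandwidth / 1e12)   # ≈ 4.25 THz, most of the ~4.4 THz C-band
print(round(channels))    # ≈ 87 wavelength channels
```

So the claimed rate is consistent with filling essentially the whole C-band with roughly 87 channels at the stated 48.9 GHz spacing.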
It is not particularly hard to do a "hero" experiment like this. Shannon limits to fiber transmission were pretty much reached experimentally a long time ago. Muxing several wavelengths together has of course been the backbone of fiber-optic transmission since its very origin. The current buzz in the field is to use micro-combs, as they did, rather than an array of lasers to provide the multiple wavelengths, but that still comes with its own set of challenges to make it practical. The micro-comb provides the other wavelengths from a single source through a nonlinear process.
My understanding from just looking quickly at the paper is that they don't modulate all the wavelengths independently, meaning that they duplicate the info they send several times to reach that high terabit rate. The laser source is only one part of a transceiver, and once you have 400+ independent modulators/receivers, the laser source becomes a much smaller concern when it comes to making it practical. A conventional laser source can be made very compact too (in a semiconductor platform) and integrated with the rest of the transceiver on the same chip. This is still where the industry is putting its efforts. These micro-combs come with some disadvantages too, relating to stability, low SNR, and uneven power among the wavelengths (which then needs to be equalized).
I agree with you that microcombs come with their own kind of challenges (and SNR is the big one); however, making semiconductor lasers small enough to fit 100 on a single chip poses lots of challenges too (in particular thermal management and wavelength stability). That said, combs (not necessarily microcombs) offer opportunities for additional functionality and optimizations. Because the comb lines are locked to each other (individual lasers normally have small wavelength fluctuations), you can space channels even closer together, as well as process multiple channels at the same time.
Regarding your comment about these hero experiments not being hard, I would argue it's actually the other way around: we are now so close to the limits that it is becoming incredibly hard to observe further gains. Also, regarding not modulating lines independently: this is how everyone (even the industry labs) demonstrates these systems. Using 100 independent transceivers would be prohibitively expensive; moreover, research has shown that you actually incur a penalty from this approach, so the demonstration is a lower bound on what could be achieved with individual tx modules.
Modulating individual lines is the only way for such a scheme to become practical in a real environment and achieve the claimed data rates. Making hundreds of modulators and receivers fit on a single chip is about as hard as making hundreds of lasers fit on a single chip, hence why it's not realistically being pursued by the industry. My point is that if you already require separate chips for the rest of the transceiver, integrating the laser itself becomes much less of an issue, and the benefits of a single laser source common to all are much more muted.
> As of early 2010, researchers have been able to multiplex over 400 wavelengths with the peak capacity of 171 Gbit/s per channel, which translates to over 70 Tbit/s of total bandwidth for a single fiber link!
> All of this is driving the need for increasingly compact, low-cost and energy-efficient solutions
and
> The ability to supply all wavelengths with a single, compact integrated chip, replacing many parallel lasers, will offer the greatest benefits
So it's not really news in the sense that existing speeds over fiber have been improved, but rather in the sense that this single chip is a viable compact, low-cost and energy-efficient alternative to many parallel chips.
I have this funny idea about a waveguide interconnect, where MIMO radios address each other inside the manifold. You could get pretty decent bus width through e.g QAM and with beam steering probably simultaneous data links.
Of course it could be made to look cool as hell, complex microwave plumbing with integrated heatsink replacing a plain old mainboard. :)
This has actually been a big research topic over the last 8 years or so. The keywords are space division multiplexing (SDM) and in particular Mode division multiplexing (MDM)
I think latency of a network will always lag behind that of local storage.
Even traveling at the speed of light, going around the circumference of the earth takes over 100ms. Obviously not all network requests go around the globe, but the fact that local storage is physically closer to your computer will always be a sizeable advantage.
I'm sending right now at home 7TB from my server to my NAS and it's taking aaages over my internal 1Gb/s ethernet network.
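That "aaages" is easy to put a number on (ignoring protocol and filesystem overhead):

```python
size_bytes = 7e12   # 7 TB payload
link_bps = 1e9      # 1 Gb/s Ethernet, best case

hours = size_bytes * 8 / link_bps / 3600
print(hours)  # ≈ 15.6 hours at full line rate
```

In practice overhead and disk speeds push it well past that.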
Am I right in thinking that there are (still) no SOHO network switches that can handle faster speeds (at least 2 Gb/s), don't have active fans, don't run hot, and aren't super-expensive? The last time I checked, about 1 year ago, I didn't manage to find anything.
I've not used one and can't speak to their quality but:
> The CRS305 is a compact yet very powerful switch, featuring four SFP+ ports, for up to 10 Gbit per port. The device has a 1 Gbit copper ethernet port for management access and two DC jacks for power redundancy. The device is a very sleek and compact metallic case without any fans, for silent operation. [0]
Thank you - looks interesting, but they don't write in the specs if it has an active fan or not, right? (e.g. I see in the pics that this random model has fans https://mikrotik.com/product/crs328_4c_20s_4s_rm#fndtn-downl... but they're not mentioned in the specs)
Anything with multi-Gig or 10GbE is still quite expensive, unless you score a good deal on used enterprise gear that will definitely have screaming fans. There are a few switches that have mostly 1GbE ports and a few 10G ports and are fanless.
Define expensive? 10GBASE-T is not that much anymore. Also a number of vendors are supporting 2.5/5G speeds. Ubiquiti has some reasonable kit. I am planning on dropping a 10G card into my FreeNAS box and getting a Thunderbolt 10G adapter for my MacBook. The idea that the 10GBASE-T PHY can work in an adapter is pretty cool. Back in the day I worked at a startup that did one of the first 48x10GBASE-T switches, and the PHYs were 5 watts each × 48. Sorting out the cooling was fun. As a DC switch the noise was fine, but working with the prototypes at your desk or in the lab was quite loud.
For switches of around 8 ports, 1GbE is about $2-3 per port. 10GbE over copper is about $70 per port, way higher than justified by the bandwidth increase alone. 10GbE is getting cheaper, sure, but it definitely isn't cheap yet. A 10GbE switch is still more expensive than all the equipment required for a 1GbE+WiFi home network.
What do they mean when they write "...with 4 10Gb SFP+ Uplinks"? Are they meant only to aggregate the traffic that comes from the 1GbE-ports or can they be used as well to exchange traffic between 4 servers, each one using 10GbE?
That switch should be able to do regular switching between its four 10Gb ports, but first you need to buy SFP+ transceivers to plug into those ports. 10G Fiber transceivers start around $20 per port, but 10G transceivers with RJ-45 ports for ordinary twisted-pair copper cabling are $40-70 per port. So to get that switch equipped to actually do 10Gb switching over copper would drive the price per port over $100.
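Putting that tally in one place (the switch price is a hypothetical street price, and the module price is the top of the range quoted above):

```python
switch_price = 130   # hypothetical street price for a 4x SFP+ switch, USD
ports = 4
copper_module = 70   # high-end 10GBASE-T SFP+ module, USD (range $40-70)

per_port = switch_price / ports + copper_module
print(per_port)  # ≈ $102.50: "over $100" per usable copper 10G port
```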
Electro-optic conversion is expensive in terms of power, so you better be sure it's necessary. There are still some people looking at hybrid computers with both optics and electronics. To be practical, you'd need both to be realized in the same platform, but they don't exactly work on the same scales, and laser integration is a big issue.
That was over 1 km utilising 7 cores. Typical fibre plants use one core per direction (transmit/receive).
This is over 75 km utilising a single core per direction, i.e. this is actually something that has the potential to be deployed in the world without having to replace all the existing fibre plants (e.g. undersea cables).
The article seems to be comparing a hero experiment to access rates. Why not at least compare to telecom backbone rates? You can do at least 1.6 Tbps per fiber, long haul, with commercially available gear.
> The highest commercial internet speed anywhere in the world is currently in Singapore, where the average download speed is 197.3 megabits per second (mbps).
I’m very surprised by this. I would have assumed the leading country would have had something a lot closer to gigabit. ‘Good enough’ must be the user reaction. Years of terrible connections have left me chasing down every last bit, even though fibre is now installed.
Because that assumption would mean everyone is getting a GPON / fibre network. In reality, even if 20% of the nation is still connected via ADSL, your average speed would be significantly lowered.
I still think we haven't fully solved the last-mile problem yet. Fibre installation still sucks for most people, and the vast majority of new homes don't have additional conduit for fibre built in.
This isn't the kind of thing that we're going to see in home internet for a long time. The newest WiFi standard is only 10Gbps at best, and routers and wired standards aren't affordable over 10Gbps yet either for home use. For internet connections, there are currently already consumer standards that can do 10Gbps symmetric (like NG-PON2) which hopefully we see being deployed more widely soon. Even in 15 years I would be surprised if the high-end of available speeds for home connections are more than 2-5x that (Of course, companies that can pay for dedicated links can already get 100Gbps+ today).
The technology in the article, if commercially practical, would first go in to carrier networks and the larger enterprise market for backhaul transit links in the next few years, then over time filter down to general enterprise networking.
Even if transit providers upgrade, it wouldn't actually be a noticeable change, because they can already do this kind of link, just with a rack with dozens of laser modules that are optically multiplexed together. This does that in a single chip which would reduce cost a lot.
You aren't kidding, the bloated JavaScript libraries will make my browsing session feel like a blistering 55.6k bps instead of the piddly 11.4k bps they do now.
This improves the speed achieved with a single chip, not the actual maximum speed possible with fiber. We already have links that can go faster than this, this invention will likely just make those high speed links cheaper / more compact / easier to manufacture.
The research here is all about core and metro networks, i.e. the networks connecting metropolitan areas to each other and the ones connecting the big users within metro areas. Home and mobile users are not directly connected to these networks, but through e.g. your provider's passive optical network. You can think of this like a road network: you live on the little side roads, and these are the big highways and ring roads. But because everyone is using more and more data on their home and mobile devices, there need to be bigger pipes in the core network (despite more and more local data centres for caching).
This would never be for a home access point. This would be used for long-haul communications (i.e. between metro areas). The data rates there are already pretty ludicrous. The current standard is called 800G, for 800 Gbit/s per wavelength.
They said game streaming, which makes me think s/he's talking about things like Twitch and Mixer, livestreaming platforms that do depend on throughput for high quality video.
Both latency and bandwidth affect gaming. Latency is important to ensure that there isn't too much lag between you moving a controller stick and your character moving, and not as important for slower games which can handle this well. Bandwidth is important because it determines the resolution and quality you can stream at, so higher bandwidths would enable full HD or 4K gaming.
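To see why bandwidth caps stream quality, compare raw video rates against a typical home link (simple arithmetic; the compression ratio is my rough assumption):

```python
def raw_gbps(width, height, fps, bits_per_px=24):
    # Uncompressed video rate: pixels per frame x frames/s x bits per pixel
    return width * height * fps * bits_per_px / 1e9

print(raw_gbps(1920, 1080, 60))  # ≈ 3.0 Gb/s uncompressed full HD
print(raw_gbps(3840, 2160, 60))  # ≈ 11.9 Gb/s uncompressed 4K
# Real streams rely on roughly 100-500x video compression
# to squeeze this into consumer links of tens of Mb/s.
```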
Given the cost of laying fiber lines across the ocean and this tech (appears) to double the capacity of an existing line, why would there not be a push to get this into use, what am I missing?
This is a field that's constantly being worked on, not sure why you say there isn't a "push" in it.
This specific thing is not faster than previous results, but more compact.
Long-distance fiber lines also have amplifiers along the way, so you can't just scale them up by changing the endpoints if it doesn't match the capability of the in-line hardware.
That is incorrect: the amplifiers have the capacity to amplify lots of channels simultaneously. So it is sufficient to only upgrade the endpoints (unless your fibre is full, where "full" means the bandwidth of the amplifiers, ~100 Tb/s for a single fibre). This has in fact been the driver behind the tremendous growth in data rates we have seen in the last 30 years: operators can incrementally upgrade links by upgrading the endpoints. Transferring a MB across the network cost hundreds of dollars in the 90s and is now essentially free (something like 10^-4 cents or so).
Cost of swapping the amplifiers, not just the end-points, makes sense as an issue. Thanks!
As for it being "only more compact" rather than a capacity increase: for a comparable single coherent optical fiber line, are existing fibers filled to capacity due to the limits of tech, economics, physics, etc.? If physics, then I'd assume all fibers are at capacity, right?
If you're going to lay a fiber across the oceans, then yeah, that capacity is going to approach the Shannon limit, but at some point there is a calculation to be made about how expensive it is to use all that capacity vs using multi-core fibers or just laying out more fibers.
The economics of it are pretty interesting. A single fiber (non-submarine) is about 8¢ a meter in raw cost, and it's said that so many were laid during the telecom bubble of the late 1990s that there are still many unused (so-called dark) fiber networks throughout the US. See for example https://www.ofsoptics.com/lighting-up-dark-fiber/
> The highest commercial internet speed anywhere in the world is currently in Singapore, where the average download speed is 197.3 megabits per second (mbps).
Err dude, you aren't reading: this is the AVERAGE speed. The average internet speed in the US or UK is nowhere near 197 Mbps. In the UK it's 28.9 Mbps and in the US 32 Mbps.
> The highest commercial internet speed anywhere in the world is currently in Singapore
This does not say "average". They don't use "average" until the 2nd half of the sentence. If that's what they meant, then they didn't communicate this clearly. For instance, this would not fly in a legal context.
Yeah, this confuses me. Gigabit is available in several places in North America. I had to check the date on the article.. posted 20 hours ago. Yep. Still confused.
> where the average download speed is 197.3 megabits per second (mbps).
Average is the key word there. Higher speeds may be available but just not used by many people due to cost.
In New Zealand for example 95% of the population has access to gigabit (with 10 gigabit being tested in places) speeds but the average download speed is only around 50 mbps due to most people opting for slower/cheaper plans.
My ISP in the Netherlands made slow plans only a few euro cheaper than fast ones. 50 mbit = 46.50, 250 mbit = 56.50, 500 mbit = 64.50, 1000 mbit = 76.50
I am paying 13€/mo for 600M down, 60M up, unlimited (France). The plan is actually 1Gb but this cannot be reached in my flat.
But the Internet is slow and/or unreliable in many places in the countryside, when available at all. We are far from having these speeds on average across the country.
Average speed, not the top available. Singapore has 5.6m citizens and 75% of them have an Internet connection. So an average speed of 197 Mbps is pretty impressive.
When you ask about top of the averages, it becomes critically important at what scale you average and how you gerrymander. Given that Singapore is both a city and a country comparing it to either seems fair.
Much better, because what the heck is "internet speed"? The most sensible definition to me is payload throughput over the IP protocol, ideally on an existing commercial link. That's the only way I see it relating to the Internet.
In the past, the "Internet speed record" was measured in units such as "terabit meters-per-second":
> ... they had managed to send nearly 840 gigabytes of data across a distance of 16,346 kilometers (10,157 miles) in less than 27 minutes, at an average speed of 4.23 gigabits per second.
> This was equal to 69,073 terabit meters per second (or 69,073 trillion bits sent through one meter in a second), which exceeded the previous record set by CalTech and CERN earlier this year. [0]
---
> The team successfully transferred data at a rate of 8.80Gbps, which is equal to 264,147 terabit-meters per second (Tb-m/s). [1]
---
> Internet2 ... has this week announced a stunning new record speed of 9.08Gbps - equal to 272,400 terabit-meters per second (Tb-m/s) [2]
---
No idea if it's still done that way or not but I don't see any mention of distance in this article (haven't looked at the paper).
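The metric is simply line rate multiplied by path length. For instance, the Internet2 figure above implies roughly a 30,000 km path (the distance is my inference, not stated in the quote):

```python
def terabit_metres_per_s(rate_gbps, distance_km):
    # rate in bit/s times distance in metres, expressed in Tb-m/s
    return rate_gbps * 1e9 * distance_km * 1e3 / 1e12

print(terabit_metres_per_s(9.08, 30_000))  # ≈ 272,400 Tb-m/s
```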
Which is exactly why it was chosen: the "purpose" of networks is moving data from point A to point B, so the "goodness" of networks is how much data gets from point A to point B and how far apart A and B are.
Then the Internet became a transport for time sensitive data (movies, voice, Etc.) and so the latency between bits gets wedged in sometimes.
But it is directly internet related. If you check the actual paper [0] - I had to search for it - you will see this quote:
> We demonstrate transmission over 75 km of fibre in the laboratory as well as in a field trial over an installed network in the greater metropolitan area of Melbourne, Australia.
Technically that 75 km was between two different labs running on dark fiber. They state more detail in this quote:
> These cables were routed from the labs access panels, to an interconnection point with the AARNet's fibre network.