All the stuff I’ve been hoarding over the past year for the newsletter.
Seth Godin on the ethical use of our newly gained technological capabilities
No wonder we’re a bit dizzy. We just multiplied our minds by many orders of magnitude. It’s easy to confuse someone else’s memory (or manipulation) with our hard-earned ability to remember things that actually happened to us.
And we’re now realizing that we have the power (and perhaps the obligation) to use shared knowledge to make better, more thoughtful decisions. And to intentionally edit out the manipulations and falsehoods that are designed to spread, not to improve our lives.
Intel to Build Silicon for Fully Homomorphic Encryption
When considering data privacy and protections, there is no data more important than personal data, whether that’s medical, financial, or even social. The discussions around access to our data, or even our metadata, become about who knows what, and whether my personal data is safe. Today’s announcement between Intel, Microsoft, and DARPA is a program designed around keeping information safe and encrypted, but still using that data to build better models or provide better statistical analysis without disclosing the actual data. It’s called Fully Homomorphic Encryption, but it is so computationally intense that the concept is almost useless in practice.
So whether that means combining hospital medical records across a state, or customizing a personal service using personal metadata gathered on a user’s smartphone, FHE at that scale is not a viable solution. Enter the DARPA DPRIVE program.
DARPA: Defense Advanced Research Projects Agency
DPRIVE: Data Protection in Virtual Environments
Intel has announced that as part of the DPRIVE program, it has signed an agreement with DARPA to develop custom IP leading to silicon to enable faster FHE in the cloud, specifically with Microsoft on both Azure and JEDI cloud, initially with the US government. As part of this multi-year project, expertise from Intel Labs, Intel’s Design Engineering, and Intel’s Data Platforms Group will come together to create a dedicated ASIC to reduce the computational overhead of FHE over existing CPU-based methods. The press release states that the target is to reduce processing time by five orders of magnitude from current methods, reducing compute times from days to minutes.
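To make the idea concrete: a homomorphic scheme lets you compute on ciphertexts and get back an encryption of the same computation performed on the plaintexts. Here is a toy sketch in Python using textbook RSA, which happens to be multiplicatively homomorphic. This is not FHE and certainly not the DPRIVE hardware, and the parameters are tiny and insecure; it only illustrates the underlying idea of computing on encrypted data.

```python
# Toy illustration of a homomorphic property (NOT FHE, NOT secure):
# textbook RSA lets you multiply two ciphertexts and obtain a ciphertext
# of the product of the plaintexts, without ever decrypting the inputs.
p, q = 61, 53                 # tiny primes, demo only
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (requires Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 6
product_ct = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
print(decrypt(product_ct))                   # 42 == a * b
```

Fully homomorphic schemes support both addition and multiplication on ciphertexts, which is what makes arbitrary computation on encrypted data possible, and also what makes it expensive enough that DARPA wants custom silicon for it.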
Programmable optical quantum computer arrives late, steals the show
Excuse me a moment—I am going to be bombastic, overexcited, and possibly annoying. The race is run, and we have a winner in the future of quantum computing. IBM, Google, and everyone else can turn in their quantum computing cards and take up knitting.
One key to quantum computing (or any computation, really) is the ability to change a qubit’s state depending on the state of another qubit. This turned out to be doable but cumbersome in optical quantum computing. Typically, a two- (or more) qubit operation is a nonlinear operation, and optical nonlinear processes are very inefficient. Linear two-qubit operations are possible, but they are probabilistic, so you need to repeat your calculation many times to be sure you know which answer is correct. A second critical feature is programmability. It is not desirable to have to create a new computer for every computation you wish to perform. Here, optical quantum computers really seemed to fall down. An optical quantum computer could be easy to set up and measure, or it could be programmable—but not both.
So, what has changed to suddenly make optical quantum computers viable? One change is the appearance of detectors that can resolve the number of photons they receive. A second key development was integrated optical circuits, whose performance has gotten much, much better. Integrated optics are now commonly used in the telecommunications industry, with the scale and reliability that that implies.
The researchers, from a startup called Xanadu and the National Institute of Standards and Technology, have pulled together these technology developments to produce a single integrated optical chip that generates eight qubits. The internal setting of the interferometer is the knob that the programmer uses to control the computation. In practice, the knob just changes the temperature of individual waveguide segments. But the programmer doesn’t have to worry about these details. Instead, they have an application programming interface (the Strawberry Fields Python library) that takes very normal-looking Python code. This code is then translated by a control system that maintains the correct temperature differentials on the chip.
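For a flavour of that "very normal-looking Python", here is a minimal sketch that runs on Strawberry Fields' bundled Fock-basis simulator rather than Xanadu's actual eight-mode chip; the gates and parameters are arbitrary choices of mine, not the circuit from the paper.

```python
import strawberryfields as sf
from strawberryfields.ops import Sgate, BSgate, MeasureFock

prog = sf.Program(2)                 # a two-mode photonic program
with prog.context as q:
    Sgate(0.5) | q[0]                # squeeze light into mode 0
    BSgate() | (q[0], q[1])          # a beamsplitter couples the two modes
    MeasureFock() | q                # photon-number-resolving detection

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})
result = eng.run(prog)
print(result.samples)                # photon counts per mode, e.g. [[0, 2]]
```

On the real device, a program along these lines is what gets compiled down to the temperature settings described above.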
What is more, the scaling does not present huge amounts of increased complexity. In superconducting qubits, each qubit is a current loop in a magnetic field. Each qubit generates a field that talks to all the other qubits all the time. Engineers have to take a great deal of trouble to decouple and couple qubits from each other at the right moment. The larger the system, the trickier that task becomes. Ion qubit computers face an analogous problem in their trap modes. There isn’t really an analogous problem in optical systems, and that is their key advantage.
Two exhaustive articles on the historical significance of the ARPANET and how its protocols worked
- https://twobithistory.org/2021/02/07/arpanet.html
- https://twobithistory.org/2021/03/08/arpanet-protocols.html
This is what was totally new about the ARPANET. The ICCC demonstration didn’t just involve a human communicating with a distant computer. It wasn’t just a demonstration of remote I/O. It was a demonstration of software remotely communicating with other software, something nobody had seen before.
So what I’m trying to drive home here is that there is an important distinction between statement A, “the ARPANET connected people in different locations via computers for the first time,” and statement B, “the ARPANET connected computer systems to each other for the first time.” That might seem like splitting hairs, but statement A elides some illuminating history in a way that statement B does not.

In a section with the belabored title, “Technical Aspects of the Effort Which Were Successful and Aspects of the Effort Which Did Not Materialize as Originally Envisaged,” the authors wrote:
Possibly the most difficult task undertaken in the development of the ARPANET was the attempt—which proved successful—to make a number of independent host computer systems of varying manufacture, and varying operating systems within a single manufactured type, communicate with each other despite their diverse characteristics.
There you have it from no less a source than the federal government of the United States.
The 1Password blog with a high level overview of their Smart Password Generator
Long, random passwords just aren’t convenient. If you need to enter 45 randomly-generated characters on another device often enough, you’ll inevitably change that password to something like password123 because it’s easy to type and remember. It’s also - you got it - not strong.
While a lengthy, unintelligible password may appear stronger than a smart one, it’s mainly illusion. Pronounceable syllables make a smart password look human generated and, therefore, weaker. But a human-generated password could never be chosen uniformly and, therefore, can’t be accurately assessed for entropy.
We’ve made a compromise of sorts. We’ve sacrificed a few bits of (theoretical) entropy, that don’t affect real-world security, to gain a whole lot of convenience, compatibility, and accessibility — and those certainly are real world, which is what really matters.
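The entropy arithmetic behind that trade-off is simple enough to check yourself. The numbers below are illustrative guesses, not 1Password's actual word list or generator: entropy depends only on how many equally likely outputs the generator can produce, not on how random the result looks.

```python
import math

def entropy_bits(choices_per_position: int, length: int) -> float:
    """Entropy of a password built from `length` independent, uniform picks."""
    return length * math.log2(choices_per_position)

# 20 characters drawn uniformly from a 26-letter alphabet:
print(round(entropy_bits(26, 20), 1))    # ~94.0 bits, looks like line noise

# 10 syllables drawn uniformly from a hypothetical 500-syllable list:
print(round(entropy_bits(500, 10), 1))   # ~89.7 bits, looks human-made
```

Both are far beyond any practical guessing attack, which is the point: the pronounceable one gives up a few theoretical bits in exchange for being something you can actually type on another device.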
Mahmoud Hashemi talks about Changing the Tires on a Moving Codebase
We realized that CI is more sensitive than most users for most of the site. So we focused in on testing the highest impact code. What’s high-impact? 1) the code that fails most visibly and 2) the code that’s hardest to retry. You can build an inventory of high-impact code in under a week by looking at traffic stats, batch job schedules, and asking your support staff.
…
And it really is important to develop close ties with your support team. Embedded in our strategy above was that CI is much more sensitive than a real user. While perfection is tempting, it’s not unrealistic to ask a bit of patience from an enterprise user, provided your support team is prepared. Sync with them weekly so surprise is minimized. If they’re feeling ambitious, you can teach them some Sentry basics, too.
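The "traffic stats" part of that inventory can be as unglamorous as counting which routes your access log hits most. A hypothetical sketch follows; the log format and field position are my assumptions, not anything from the talk.

```python
from collections import Counter

def top_routes(log_path: str, n: int = 20):
    """Count request paths in a combined-format access log, most-hit first."""
    hits = Counter()
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) > 6:
                hits[parts[6]] += 1    # the "/path" field in a combined log line
    return hits.most_common(n)

# e.g. top_routes("access.log") -> [("/api/render", 90231), ("/login", 4410), ...]
```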
My main takeaways
- Everyone has a plan ’till they get punched in the mouth. — Mike Tyson
- Prioritise work on what actually matters. Perfection can wait.
- People and their feedback come first, and matter a lot more than data-driven decisions. After all, software is built by and for people in the first place.
A Bear? Where? Over There — Strapping a giant teddy bear to a car in the name of highway safety
You’re adapting my what? When activated, adaptive cruise control uses forward-looking radar to maintain a specific distance to a vehicle in the lane ahead, slowing down or speeding up (to a maximum of whatever speed cruise control was set to) as necessary. Lane-keeping systems use forward-looking cameras to detect the lane markings on a road to keep the vehicle between them, and when both are active together, the vehicle will do a pretty good facsimile of driving itself, albeit with extremely limited situational awareness.
Which is where the human comes in. Under the SAE’s definitions for automated driving, in Level 2 the car controls steering, braking, and acceleration, but the human is responsible for providing situational awareness at all times. Of course, this raises the question of whether the driver is actually paying attention.
To test whether drivers were actually paying attention while using a Level 2 system, IIHS recruited participants and then had them drive for roughly an hour, either using the car’s Level 2 system or not. At three predetermined locations on the test route, a second car—the one with the large pink bear attached to its trunk—would overtake the participant’s vehicle. At the end of the study, the drivers were asked if they saw anything odd, and if so, how many times.
Crazy obsessions like this are why I even got into writing software. TT2020 is an advanced, open source, hyperrealistic, multilingual typewriter font for a new decade!
From the problem page,
In the second image, there are three ‹N›’s. Yet, they all look exactly the same. A real typewriter can, quite rarely, have one of its letters damaged, or misaligned, such that that letter regularly makes an inferior strike to all the other letters. However, this degree of regularity is impossible; could Underwood or Remington have achieved it, they would have leapt for joy.
While working on the project, incredibly, another bad typewriter scene intruded upon my life. I don’t often sit around and watch movies, so I suppose there are only two possibilities:

a. There are so many of these unrealistic typewritten documents in late-2010’s cinema that almost any movie with a typewritten document in it will be hopelessly unrealistic, or
b. The universe, nay, God himself, was urging me on to complete this project in lieu of others I could finish!
Did you know some planets can generate new atmospheres?
In general, we don’t currently have the technology to image exoplanets unless they’re very large, very young, and a considerable distance from the star they orbit. Yet we can still get some sense of what’s in their atmosphere. To do that, we need to observe a planet that transits across the line of sight between Earth and its star. During a transit, a small percentage of the star’s light will travel through the planet’s atmosphere on its way to Earth, interacting with the molecules present there.
Those molecules leave a signature on the spectrum of light that reaches the Earth. It’s an extremely faint signature, since most of the star’s light never even sees the atmosphere. But by combining the data from a number of days of observation, it’s possible to get this signature to stand out from the noise.
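The statistics of that trick are worth a quick sketch: uncorrelated noise averages down like 1/sqrt(N), so a dip that is invisible in a single observation emerges once enough observations are stacked. The signal size and noise level below are made-up numbers for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
true_dip = 0.05          # a fixed 5% absorption feature (toy value)
noise_sigma = 1.0        # per-observation noise, much larger than the signal
n_obs = 400

one_night = true_dip + rng.normal(0, noise_sigma)
stacked = true_dip + rng.normal(0, noise_sigma, n_obs).mean()  # noise ~ sigma / sqrt(400)

print(f"single observation: {one_night:+.3f}")   # swamped by noise
print(f"stack of {n_obs}:       {stacked:+.3f}") # close to the 0.05 dip
```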
That’s what scientists have done with GJ 1132 b, an exoplanet that orbits a small star about 40 light years from Earth. The planet is roughly Earth’s size and about 1.5 times its mass. It also orbits extremely close to its host star, completing a full orbit in only 1.6 days. That’s close enough to ensure that, despite the small, dim star, GJ 1132 b is extremely hot.
It’s so close and hot, in fact, that the researchers estimate that it’s currently losing about 10,000 kilograms of atmosphere every second. As the host star was expected to be brighter early in its history, the researchers estimate that GJ 1132 b would have lost a reasonable-sized atmosphere within the first 100 million years of its existence. In fact, over the life of the planet, the researchers estimate that it could have lost an atmosphere weighing in at about five times the planet’s current mass—the sort of thing you might see if the remaining planet were the core of a mini-Neptune.
So, researchers were probably surprised to find that, based on data from the Hubble, the planet seems to have an atmosphere.
How’d that get here?
Moderation in Infrastructure – Stratechery by Ben Thompson
It was Patrick Collison, Stripe’s CEO, who pointed out to me that one of the animating principles of early 20th-century Progressivism was guaranteeing freedom of expression from corporations:
Exactly the same kind of restraints upon freedom of thought are bound to occur in every country where economic organization has been carried to the point of practical monopoly. Therefore the safeguarding of liberty in the world which is growing up is far more difficult than it was in the nineteenth century, when free competition was still a reality. Whoever cares about the freedom of the mind must face this situation fully and frankly, realizing the inapplicability of methods which answered well enough while industrialism was in its infancy.
This is why I take Smith’s comments as more of a warning: a commitment to consistency may lead to the lowest common denominator outcome Prince fears, where U.S. social media companies overreach on content, even as competition is squeezed out at the infrastructure level by policies guided by non-U.S. countries. It’s a bad mix, and public clouds in particular would be better off preparing for geographically-distinct policies in the long run, even as they deliver on their commitment to predictability and process in the meantime, with a strong bias towards being hands-off. That will mean some difficult decisions, which is why it’s better to make a commitment to neutrality and due process now.
Excel Never Dies
Excel may be the most influential software ever built. It is a canonical example of Steve Jobs’ bicycle of the mind, endowing its users with computational superpowers normally reserved for professional software engineers. Armed with those superpowers, users can create fully functional software programs in the form of a humble spreadsheet to solve problems in a seemingly limitless number of domains. These programs often serve as high-fidelity prototypes of domain-specific applications just begging to be brought to market in a more polished form.
If you want to see the future of B2B software, look at what Excel users are hacking together in spreadsheets today.
Pair with: These SaaS Companies Are Unbundling Excel – Here’s Why It’s A Massive Opportunity
On The dance between the long tail and the short head
The disconnect occurs when producers and creators try to average things out and dumb things down, hoping for the big hit that won’t come. Or overspend to get there. The opportunity lies in finding a viable audience and matching the project’s focus and budget to the people who truly want it.
And the dance continues.
Ten reasons to write a book (I’d say these translate well to blogging too!)
Here are four …
- It clarifies your thinking.
- It’s a project that is completely and totally up to you.
- Because it’s a generous way to share.
- It will increase your authority in your field.
Brent Simmons on How NetNewsWire Handles Threading
Advice
Some developers I’ve known seem to think that being good at concurrency makes them badass. Others seem to think that senior developers must be great at concurrency, and so they should be too.
But what senior developers are good at is eliminating concurrency as much as possible by developing a simple, easy, consistent model to follow for the app and its components.
And this is because concurrency is too difficult for humans to understand and maintain. Maybe you can create a system that makes extensive use of it, and have it be correct for one day. But think of your team! Even if you’re a solo developer, you and you-plus-six-months make up a team.
I know you’re worried about blocking the main thread. But consider this: it’s way easier to fix a main-thread-blocker than it is to fix a weird, intermittent bug or crash due to threading.
Laurie Barth on Human-Readable JavaScript
Experts don’t prove themselves by using every piece of the spec; they prove themselves by knowing the spec well enough to deploy syntax judiciously and make well-reasoned decisions. This is how experts become multipliers—how they make new experts.
So what does this mean for those of us who consider ourselves experts or aspiring experts? It means that writing code involves asking yourself a lot of questions. It means considering your developer audience in a real way. The best code you can write is code that accomplishes something complex, but is inherently understood by those who examine your codebase.
And no, it’s not easy. And there often isn’t a clear-cut answer. But it’s something you should consider with every function you write.
Len Kleinrock: The First Two Packets on the Internet
AnandTech Interview with Jim Keller: ‘The Laziest Person at Tesla’
First, I was at Digital (DEC) for 15 years, right! Now that was a different career because I was in the mid-range group where we built computers out of ECL - these were refrigerator-sized boxes. I was in the DEC Alpha team where we built little microprocessors, little teeny things, which at the time we thought were huge. These were 300 square millimeters at 50 watts, which blew everybody’s mind.
So I was there for a while, and I went to AMD right during the internet rush, and we did a whole bunch of stuff in a couple of years. We started Opteron, HyperTransport, 2P servers - it was kind of a whirlwind of a place. But I got sucked up or caught up in the enthusiasm of the internet, and I went to SiByte, which got bought by Broadcom, and I was there for four years total. We delivered several generations of products.
I was then at P.A. Semi, and we delivered a great product, but they didn’t really want to sell the product for some reason, or they thought they were going to sell it to Apple. I actually went to Apple, and then Apple bought P.A. Semi, and then I worked for that team, so you know I was between P.A. Semi and Apple. That was seven years, so I don’t really feel like that was jumping around too much.
Then I jumped to AMD I guess, and that was fun for a while. Then I went to Tesla where we delivered Hardware 3 (Tesla Autopilot). So that was kind of phenomenal. From a standing start to driving a car in 18 months - I don’t think that’s ever been done before, and that product shipped really successfully. They built a million of them last year. Tesla and Intel were a different kind of a whirlwind, so you could say I jumped in and jumped out. I sure had a lot of fun.
P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.