<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

 <title>Jonas Schneider</title>
 <link href="http://jonasschneider.com/atom.xml" rel="self"/>
 <link href="http://jonasschneider.com/"/>
 <updated>2020-09-04T13:19:56+00:00</updated>
 <id>http://jonasschneider.com/</id>
 <author>
   <name>Jonas Schneider</name>
   <email>mail@jonasschneider.com</email>
 </author>

 
 <entry>
   <title>High availability for software installation procedures</title>
   <link href="http://jonasschneider.com/2015/04/high-availability-installation-procedures.html"/>
   <updated>2015-04-27T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2015/04/high-availability-installation-procedures</id>
   <content type="html">
    
&lt;p&gt;The best way to vouch for the correctness of a piece of software is to write a
proof that shows it. Few people actually do that, except maybe for the awesome
algorithms and distributed systems communities (e.g.
&lt;a href=&quot;https://ramcloud.stanford.edu/raft.pdf&quot;&gt;Raft&lt;/a&gt;). Barring that, the next best
thing is likely an exhaustive test suite.&lt;/p&gt;

&lt;p&gt;However, even given a proof showing the correctness of an algorithm,
implementation bugs can always occur. The likelihood of this happening
increases with implementation complexity. This problem encouraged the original
Raft authors to make “understandability” one of the main design goals of their
algorithm. Their work highlights the importance of incorporating the software
developer and user into the model of implementation correctness assessment. We
extend this idea to the installation instructions for
&lt;a href=&quot;https://github.com/jonasschneider/haven&quot;&gt;Haven&lt;/a&gt;, our software for securely
backing up data storage servers.&lt;/p&gt;

&lt;p&gt;An interesting observation is that it’s hard to show propositions about
software-caused side effects without attaching procedures to the software. For
example, backup software can only ever provide me with a daily snapshot of
all my data under the assumption that I run it at least once a day, or maybe
even just under the assumption that I install the software correctly.&lt;/p&gt;

&lt;p&gt;However, as soon as a procedure must be manually followed by the user
to achieve correctness, human error becomes an issue. Humans make typos,
forget information, and are lazy and a general nuisance. It becomes an
interesting question, then, how to provide safety under these common types of
operator error.&lt;/p&gt;

&lt;p&gt;Our approach borrows simple concepts from the field of high-availability
engineering: &lt;em&gt;fail-safe&lt;/em&gt; behaviour, &lt;em&gt;positive acknowledgement&lt;/em&gt;, and
&lt;em&gt;redundancy&lt;/em&gt; to avoid flaky single points of failure.&lt;/p&gt;

&lt;p&gt;Failing safely means that when encountering seemingly “strange” conditions,
the system should abort and loudly yell at the user instead of trying to “keep
on trucking” and potentially report a false success. In our case, this means
that the backup process should assume that the user misconfigured everything,
didn’t touch the defaults, and isn’t doing manual data integrity checks to
test backups for correctness.&lt;/p&gt;

&lt;p&gt;It also immediately follows that the software has to provide positive “Yes,
the backup worked”-style notifications to the user, since the absence of
notifications could always also mean that e.g. the host running the backup
software has lost power.&lt;/p&gt;
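&lt;p&gt;As a minimal sketch of these two principles combined (illustrative only, not Haven’s actual code; the &lt;code&gt;notify&lt;/code&gt; function is a hypothetical stand-in for an out-of-band channel such as e-mail):&lt;/p&gt;

```python
import subprocess
import sys

def notify(message):
    # Hypothetical positive-acknowledgement channel; a real system would
    # deliver this out-of-band (e-mail, chat) so that silence is noticeable.
    print("NOTIFY:", message)

def run_backup(command):
    """Run a backup command, failing safely and acknowledging positively."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # Fail safe: never report a false success; abort loudly instead.
        notify("BACKUP FAILED: " + result.stderr.strip())
        sys.exit(1)
    # Positive acknowledgement: an explicit "yes, it worked" message.
    notify("Backup completed successfully.")
    return True
```

&lt;p&gt;The key property is that silence is never interpreted as success: the user either receives an explicit acknowledgement or a loud failure.&lt;/p&gt;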

&lt;p&gt;Finally, by adding redundancy to the procedures given to the user, we can
greatly reduce the chance of an accumulation of user error great enough to
render the backup useless. In our case, we provide the user with two completely
separate sets of installation instructions for two very different software
systems. Each system independently attempts to perform the task of backing up
the user’s data, and the user is instructed to install both according to their
respective instructions.&lt;/p&gt;

&lt;p&gt;Therefore, to incur data loss, the user is required to make distinct errors
within &lt;em&gt;every&lt;/em&gt; set of instructions, compared to the usual requirement of just
a single error. This also applies to bugs; for backups to be corrupted, there
have to be bugs in &lt;em&gt;all&lt;/em&gt; of the implementations, compared to just one bug in a
single implementation.&lt;/p&gt;
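&lt;p&gt;Under the (admittedly idealized) assumption that errors across the instruction sets are independent, the arithmetic works out strongly in the user’s favour; a quick sketch with made-up numbers:&lt;/p&gt;

```python
def combined_failure_probability(per_system_failure_probs):
    """Probability that ALL redundant backup systems fail at once.
    Assumes failures are independent; correlated failure modes
    (e.g. the shared host losing power) are deliberately not modeled."""
    total = 1.0
    for p in per_system_failure_probs:
        total *= p
    return total

# A single system with a hypothetical 5% chance of a fatal setup error
# risks 5% data loss; two independent 5% systems risk only 0.25%.
single = combined_failure_probability([0.05])
redundant = combined_failure_probability([0.05, 0.05])
```

&lt;p&gt;The same multiplication applies to independent implementation bugs, which is why the two systems should share as little code as possible.&lt;/p&gt;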

&lt;p&gt;So far, Haven’s components have been used successfully in production for two
years, backing up a dataset of around 600 GB to two independent cloud
providers. Two disaster recoveries were performed, one revealing troubling
behaviour of Duplicity’s OpenStack backend that caused it to fail to detect
aborted volume uploads under network congestion.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://github.com/jonasschneider/haven&quot;&gt;Haven source code&lt;/a&gt; is available
on GitHub. Contributions of any kind are wholeheartedly welcome. If you’re using
Haven, &lt;a href=&quot;mailto:mail@jonasschneider.com&quot;&gt;I’d love to hear from you&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Future work should focus on identifying more single points of failure (e.g.
the host kernel, and ZFS) and assessing the expected damage on failure given
their probability of failure. Also, better management of user attention in
case of significant failures or delays should be implemented to achieve a higher
signal-to-noise ratio in user communications. Finally, a system for automatic
verification of the stored data could use a “staged” recovery scenario to
check for a wider variety of failure modes than what is covered by available
tooling. This would uncover issues with verification tools (e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;duplicity
verify&lt;/code&gt;) that assume too much about local state on the system performing the
recovery.&lt;/p&gt;

   </content>
 </entry>
 
 <entry>
   <title>World on a Wire: Are we living inside a simulated reality?</title>
   <link href="http://jonasschneider.com/2015/01/universe-simulation-hypothesis.html"/>
   <updated>2015-01-01T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2015/01/universe-simulation-hypothesis</id>
   <content type="html">
    
    &lt;style&gt;
p.int-meta { font-style: italic; font-size: 90%; opacity: 0.8; margin-top:30px;}
p.q { font-weight: bold; }
&lt;/style&gt;

&lt;p&gt;It sounds like the plot of a science-fiction movie: the world we live in may not really exist, but might only be simulated by a sophisticated cluster of computers — and we ourselves are part of this gigantic simulation, which encompasses the entire universe as we know it. What has long been only a philosophical thought experiment is now being investigated by physicists and computer scientists with scientific scrutiny.&lt;/p&gt;

&lt;figure style=&quot;float:right;margin-left:30px&quot;&gt;
&lt;img src=&quot;/images/content/mueller_quade.jpg&quot; /&gt;
&lt;br /&gt;
&lt;figcaption&gt;&lt;a href=&quot;http://crypto.iti.kit.edu/index.php?id=iks-mueller-quade&quot;&gt;Jörn Müller-Quade&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Jörn Müller-Quade, professor of Computer Science at the &lt;a href=&quot;http://www.kit.edu/index.php&quot;&gt;Karlsruhe Institute of Technology&lt;/a&gt; and head of the research group &lt;a href=&quot;http://crypto.iti.kit.edu/index.php?id=iti-crypto&amp;amp;L=2&quot;&gt;Cryptography and IT Security&lt;/a&gt;, agreed to discuss these highly speculative and mind-bending ideas.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;Professor, first off: what kind of emotions do these ideas evoke in your mind — the entire universe as a “World on a Wire”, with all of reality controlled by programmers outside of our perception?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;This might change over the course of our conversation, but my first reaction to that is somewhat dismissive — I think the laws of nature are much too harmonious and coherent for them to be a simulation, at least in the sense that things are only “faked” to our brains.&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;You have to differentiate: it could either be a simulation like in a video game, where light reflections and other phenomena are specifically faked to fool the player. Or, it might be a “true” physical simulation, which could in the end be indistinguishable from an actual universe.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;Let’s assume the simulation hypothesis is true; we actually live in a simulated world, and don’t know about it. How would a simulated world like this look from the outside? Like a data center? Or maybe more like a giant net of &quot;pods&quot;, like in &lt;i&gt;The Matrix&lt;/i&gt; or &lt;i&gt;Avatar&lt;/i&gt;?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;It would probably look like a data center of the future, and how those are going to look, nobody knows for sure.
If we are, in fact, a simulation embedded within a world “above” us, then there are two options.
Either the technology of the hyperworld is very similar to ours,
because they are trying to simulate a universe similar to theirs;
in that case, we at least get an impression of how their world looks and behaves.
On the other hand, it could be something entirely different; it’s possible that we are just one iteration of some form of genetic algorithm, where different variants of conditions are tried out until, randomly, something like our current conditions emerges.
If that’s true, then we’re probably running on computation technology that we can’t even imagine;
maybe even with entirely different physical laws.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;Empirically, Moore’s Law has correctly predicted the doubling of computational power every 18 months (to simplify its predictions a bit). Already today, entire motion pictures can be rendered digitally instead of actually being filmed, even though this process is still somewhat cumbersome. Skipping ahead 20-30 years, will we have simulations that are accurate down to much smaller scales, for example on the level of atoms? Or do we at some point encounter fundamental limits of computation?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;One insight that seems philosophical, but is actually grounded in Computer Science, is: simulating a sufficiently complex physical process is likely to take as many computational steps as the process itself. What this means is: I don’t think it will be possible, within our limited universe, to accurately simulate another universe at the same scale as ours. However, if we manage to advance computational power, through Moore’s Law or otherwise, it might reasonably be possible to accurately simulate single people or smaller communities of people and study their interactions. The &lt;a href=&quot;https://www.humanbrainproject.eu/&quot;&gt;Human Brain Project&lt;/a&gt;, for example, is trying exactly that: to simulate a human brain on the neuron level. They are trying this even though we haven’t yet figured out exactly what is happening at the neuron level. However, it's plausible to imagine that the successor of the successor of the Human Brain Project will manage to significantly improve this understanding.
When that happens, we'll have to give some thought to what extent such a simulated brain can be considered a sentient organism of its own and how it might perceive pain and suffering.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;Going from simulated brains to simulated universes — does the simulation of an organism or part of an organism differ from the simulation of all of physical reality?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;I don’t know that for sure, of course.
But my speculation is that consciousness and thought activity are created when something sufficiently complex and adhering to the correct rules is simulated.
For that to happen, you might not even have to descend to the physical level of quarks and quantum chromodynamics.
A much simpler model, containing for example quantum information theory, could suffice.
Actually, Penrose even speculates&lt;sup id=&quot;fnref:penrose&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:penrose&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; that quantum effects are a necessary precondition to achieve consciousness;
this would mean that only quantum computers will allow us to accurately simulate a level of thought activity that we would describe as sentient.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;Scientists in Washington and Bonn have investigated&lt;sup id=&quot;fnref:glitches&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:glitches&quot; class=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; &lt;em&gt;how well&lt;/em&gt; a simulation of our universe at a physical level could work, no matter what kind of computer they use.
In particular, they looked for numerical inaccuracies that would look like “Glitches in the Matrix”,
which skeptics could find in order to detect the simulation.
Sadly, they came to the conclusion that it’s likely easier to “fix” the simulation from the outside than it would be for us,
on the inside, to detect such glitches.
Does that mean that we don’t even stand a chance to prove that we &lt;em&gt;don’t&lt;/em&gt; live in a simulation?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;If these glitches are very rare, we would likely always interpret them as measurement errors.
Since our priors are biased against a simulation, we would believe a hypothesis like this only if
we had enormous amounts of reproducible data from a controlled lab setting.
Therefore, the only possibility for us to find out that we live in a simulation
might be that the ones who are simulating us consider us mature enough to actually &lt;i&gt;tell us&lt;/i&gt; directly.&lt;/p&gt;

&lt;p class=&quot;q&quot;&gt;What kind of advice would you give to the creators of our simulation? What should be the goal of their work?&lt;/p&gt;

&lt;p class=&quot;a&quot;&gt;I think that the world right now has too much suffering and pain for humans.
If this is really a simulation, I would question whether that is really necessary, or if it can be avoided.
On the other hand, I'd take this as an indicator that our world is not a cruel simulation,
but a harsh reality that we'll have to work on ourselves to make it a better place.&lt;/p&gt;

&lt;p class=&quot;int-meta&quot;&gt;Interview conducted in German and edited by Jonas Schneider for Radio KIT. &lt;a href=&quot;/images/content/universe_simulation.mp3&quot;&gt;Download the original audio (MP3, 06:11).&lt;/a&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:penrose&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“Reply to criticism of the ‘Orch OR qubit’ – ‘Orchestrated objective reduction’ is scientifically justified”. Stuart Hameroff and Roger Penrose. Physics of Life Reviews (Elsevier) 11 (1): pp. 94–100, 2013. &lt;a href=&quot;http://quantum.webhost.uits.arizona.edu/prod/sites/default/files/Hameroff,%20Penrose%20-%20Reply%20to%20criticism%20of%20the%20Orch%20OR%20qubit.pdf&quot;&gt;Available online.&lt;/a&gt; &lt;a href=&quot;#fnref:penrose&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:glitches&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“Constraints on the Universe as a Numerical Simulation”. Silas R. Beane, Zohreh Davoudi, Martin J. Savage. INT-PUB-12-046 (Cornell University Library), 2012. &lt;a href=&quot;http://arxiv.org/pdf/1210.1847v2&quot;&gt;On arXiv.&lt;/a&gt; &lt;a href=&quot;#fnref:glitches&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

   </content>
 </entry>
 
 <entry>
   <title>In the end, software fixes itself</title>
   <link href="http://jonasschneider.com/2013/12/in-the-end-software-fixes-itself.html"/>
   <updated>2013-12-04T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2013/12/in-the-end-software-fixes-itself</id>
   <content type="html">
    
    &lt;p&gt;I recently purchased a GeForce graphics card from an online retailer. The card, as a promotion, included free copies of “Assassin’s Creed IV”, “Splinter Cell: Blacklist” and “Batman: Arkham Origins”, all three recently released PC games, with a current combined market worth of 180€ (whether or not that price is ridiculous is another question). I’ll happily play those!&lt;/p&gt;

&lt;p&gt;Just how hard can it be for me to actually get these games running?&lt;/p&gt;

&lt;p&gt;If I just downloaded the games from The Pirate Bay, I’d have to mount the ISO image, run the installer, and copy the cracked files over to the game directory. In my perfect dream world, for the free games they’d send me an email together with a download link for the installer. Time taken: download time plus 5 minutes.&lt;/p&gt;

&lt;p&gt;Reality looks a bit different. With the graphics card package, they actually shipped me a physical coupon that had a code written on it. Actually, two coupons. One for Batman and another for the other two. Makes perfect sense.&lt;/p&gt;

&lt;p&gt;The coupons each told me to visit an Nvidia-owned page to redeem them. As I know Batman is sold on Steam, I optimistically tried entering the product-key-looking thing into Steam’s activation box. Aw, no luck. So, on to fill out the Nvidia form, which actually wants your date of birth and address (of course I simply entered garbage). Of course, they also subscribe you to their newsletter. Great, a CD key appeared! On to enter it on Steam. Yup, that worked. Thanks, Steam!&lt;/p&gt;

&lt;p&gt;This serves to demonstrate that even stupid policies can be so well-executed that the overall experience doesn’t suffer remarkably. Steam is just that – “bearable”. I’ve never had issues with Steam as an end user, and pragmatism trumps my philosophy here. (Keep in mind that in setting up my Steam account, I of course also had to enter all of my personal details, and I’m forever tied to these games and prohibited to selling them off to anyone else. An issue for another day.)&lt;/p&gt;

&lt;p&gt;Now on to the other two games. In order to get the second code, you have to visit a different Nvidia-owned page that is identical to the other one except for the header text. Enter all your data again, and you get two other codes for the other two games. These look suspiciously short, I thought. Of course, Steam doesn’t accept them. After clicking through three “How to redeem” links on the nVidia page, I realized that those weren’t actually codes to be used with Steam. They were to be used with UPlay.&lt;/p&gt;

&lt;p&gt;Valve launched Steam back in 2003. Starting in 2005, non-Valve games could be purchased on Steam, provided the developer made an agreement with Valve (with Valve obviously taking a cut.) Today, Steam has 65 million users, 3000 games, and 75% of digital PC games sales are made through Steam, according to Wikipedia. Of course, Steam profits greatly from the network effects provided by the enormous user base.&lt;/p&gt;

&lt;p&gt;Now everyone else thinks they have to do the same thing. EA made Origin, which is a copy of Steam. Ubisoft made UPlay, which is a copy of Steam. What’s different is that on Origin, you can only play EA games and on UPlay, you can only play Ubisoft games. How any company executive can be so narrow-minded as to think people would accept this as their go-to gaming platform is beyond me. And when people just use your software because they have to, they won’t share shit on Facebook from it, they won’t read your update news, they won’t take your surveys and the last thing they’ll do is buy more things in your store.&lt;/p&gt;

&lt;p&gt;Fine, fine. I’ll start UPlay. I even have it installed! “Updating UPlay.” A minute goes by. “Your account details”? You can’t even remember my email address from the last time I signed in from this computer? After trying several email addresses, I finally get in.&lt;/p&gt;

&lt;p&gt;Now let’s enter that code. Hm. “Invalid code.” That’s strange.. oh, great, this is actually not a UPlay code, but a promotion code for the UPlay store. That’s .. reasonable. Open the UPlay store, sign in again, confirm your UPlay profile (else you can’t access the store). Then click on the two links buried in the Nvidia guide to add the two games to your “Shopping Cart”. Then “Go to Checkout” on the UPlay Store. Enter address again. Confirm payment details. For no payment. Yes. Yes. Confirm. Finalize.&lt;/p&gt;

&lt;p&gt;Now, a little notice pops up on the Checkout page. It doesn’t look like an error, so I ignored it the first few times. When I actually read it, I started to cry.&lt;/p&gt;

&lt;p&gt;“Some of your items are only available for purchase from 11pm to 6am due to child protection ratings. Between 6am and 11pm, you can’t purchase these items. However, you can keep them in your Shopping Cart and check out later.”&lt;/p&gt;

&lt;p&gt;What the actual fuck is that.&lt;/p&gt;

&lt;p&gt;I don’t think this needs any further explanation. With about 20 browser tabs open, I managed to redeem one of my three games. This took 20 minutes of my time. For now.&lt;/p&gt;

&lt;p&gt;Now, who can I blame here? In my rage, I blame everyone. Alternate (the online retail store) and Nvidia for failing to provide even a vaguely reasonable procedure and interface for redeeming your coupon. Ubisoft &amp;amp; EA for not accepting that other people make games, too, when designing their horrendous user interfaces for UPlay and Origin, respectively. German politics for imposing “opening hours” on the internet. And myself for not torrenting the games in the first place.&lt;/p&gt;

&lt;p&gt;As I see it, all these issues can be fixed by applying basic reasoning, human intellect, and maybe a bit of non-crappy software.&lt;/p&gt;

&lt;p&gt;PC gamers are a strange lot; most of them don’t really know what actually happens inside a computer, but are great at comparing benchmarks and spec sheets and picking out the best components for their machines. They also know their way around a file system for modding, patching, and cracking their games. Mostly, though, they don’t care about either extreme: they neither care about the ease of use that end users require, nor about the philosophical implications of DRM that lie at the Richard Stallman end of the spectrum. For them, it’s all about the game.&lt;/p&gt;

&lt;p&gt;The good part is that DRM will lose in the end. Activation, serials, CD keys, game launchers, overlays, all that crap will go down the drain eventually, just like CD protection rootkits failed. It’s simply a consequence of data being digital; as soon as a single computer-versed person on the globe breaks the encryption/protection/whatever, they can just share the pure version with everyone else. (Of course, the same applies to information in general. And that’s why it’s a great time to be alive!)&lt;/p&gt;

&lt;p&gt;Needless to say, I now downloaded Splinter Cell and Assassin’s Creed from The Pirate Bay. Off to playing now.&lt;/p&gt;

   </content>
 </entry>
 
 <entry>
   <title>Introducing the HTTP Transport Layer</title>
   <link href="http://jonasschneider.com/2013/06/http-transport-layer.html"/>
   <updated>2013-06-01T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2013/06/http-transport-layer</id>
   <content type="html">
    
    &lt;p&gt;&lt;em&gt;Warning: this post is not about technical details. Expect high doses of abstraction, nitpickery, and wanna-be-philosophy.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The web runs on HTTP. It doesn’t run on SOAP, XMLRPC, WS-*, IPsec, and other friends from the grave, but on the shoulders of a protocol that is easy for humans to read and hard (even in 2013) for machines to parse.&lt;/p&gt;

&lt;p&gt;At its core, though, HTTP is a transport-layer protocol: request something with a path, a method and a bunch of other headers (and maybe a blob), and you get back another bunch of headers plus a blob. This is what I call the &lt;strong&gt;HTTP Transport Layer.&lt;/strong&gt; Resource-oriented application concepts like REST are layered above this transport layer, which is basically a specialized RPC scheme.&lt;/p&gt;

&lt;p&gt;The most common implementation of the HTTP Transport Layer used today is still HTTP/1.1, but there are already competing alternatives such as SPDY (the starting point for upcoming HTTP/2.0 work), and rising demand for improved security, privacy, resilience, and performance of networking is bound to lead to further advances and new network protocols.&lt;/p&gt;

&lt;p&gt;The implications are interesting: while it could be argued that SPDY and the like should use their own URI schemes, that would not make a lot of semantic sense; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;spdy://google.com&lt;/code&gt; is still semantically the same resource as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://google.com&lt;/code&gt;. Contrast this with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ftp://google.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The difference boils down to the ambiguous definition of a Uniform Resource Identifier (URI), which can be both a &lt;em&gt;Name&lt;/em&gt; and a &lt;em&gt;Locator&lt;/em&gt;. Basically, it’s a name if it identifies (erm..) what I’m looking at, and it’s a locator if it gives me a clear indicator of how I would go ahead and fetch the thing. For example, a post in your application could have a URL of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://myblog.com/posts/1&lt;/code&gt; (which others know how to resolve) and have a URN of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;myblog-post:1&lt;/code&gt;. The un-intuitiveness of the second example already shows how pervasive the “locator” role is in reality.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http:&lt;/code&gt; URI scheme used to have purely locational character: if I wanted to fetch &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://google.com&lt;/code&gt;, I would take that as an order to establish a TCP connection to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;google.com:80&lt;/code&gt;, and talk HTTP/1.1 over that connection.&lt;/p&gt;

&lt;p&gt;But now I want to go to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://google.com&lt;/code&gt;. Again, my course of action is determined by the URI scheme: the spec says, establish a TCP connection to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;google.com:443&lt;/code&gt;, do the TLS dance and then talk HTTP/1.1 again.&lt;/p&gt;

&lt;p&gt;It can be argued that this is a clear indicator that in the examples above, the URI is 100% a locator; even though I am fetching the same resource (“Google’s front page”) twice, there are two different things that I work with. But, as I see it, there is a little bit of semantic meaning: The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https&lt;/code&gt; resource is actually “Google’s front page that is verified to be sent by Google”. It becomes clear that there is a breach of layers here; this resource by definition interferes with the transport layer, because what “verified to be sent by Google” actually means depends on the way I fetched the resource in the first place (and the verification policy, of course, which is a big topic in itself.)&lt;/p&gt;

&lt;p&gt;What I am proposing is to &lt;strong&gt;start treating &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http:&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https:&lt;/code&gt; URIs as names&lt;/strong&gt;. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://google.com&lt;/code&gt; would then be defined as “a resource I can fetch from google.com at path &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/&lt;/code&gt;, using some implementation of the HTTP Transport Layer that both &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;google.com&lt;/code&gt; and I know.” Note that the mechanism used is not defined, and everybody has to ensure to degrade gracefully if the other end doesn’t support their favorite transport.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;real-world implication&lt;/strong&gt; is this: if the thing you’re developing provides an infrastructure for creating what I above described as the “HTTP Transport Layer” (and you’re not working within HTTP/1.1), then make sure you design it well enough so that clients can seamlessly upgrade to your transport layer from whatever they are currently using, which at the time of writing is most likely HTTP/1.1.&lt;/p&gt;

&lt;p&gt;To be more specific, that means that determining whether your transport is supported by a site should not cause a performance loss. HTTPS itself is a bad example here: check whether &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;google.com:443&lt;/code&gt; responds; if not, fall back to HTTP/1.1. It’s so bad that, in fact, nobody even tries. I personally like the way Google did it with SPDY; adding next protocol negotiation to TLS is a step in the right direction, away from magic port numbers and to mutual, well, negotiation of the protocol to be used next. Transport layers built on top of TLS (which is definitely not the worst spot to be in) should use NPN to provide a clear discovery and upgrade path for supporting clients and servers.&lt;/p&gt;
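&lt;p&gt;The negotiation outcome itself is easy to model. As an illustrative sketch (this captures only the selection logic, not the TLS extension wire format, and the protocol labels are just examples):&lt;/p&gt;

```python
def negotiate_transport(client_preferences, server_supported, fallback="http/1.1"):
    """Pick the first transport the client prefers that the server also
    supports, degrading gracefully to a baseline everyone speaks."""
    supported = set(server_supported)
    for protocol in client_preferences:
        if protocol in supported:
            return protocol
    # Graceful degradation: never fail outright just because the other
    # end lacks our favorite transport.
    return fallback

chosen = negotiate_transport(["spdy/3", "http/1.1"], ["spdy/3", "http/1.1"])
legacy = negotiate_transport(["spdy/3"], ["http/1.0", "http/1.1"])
```

&lt;p&gt;Crucially, when this selection rides along inside a handshake that happens anyway, discovering an unsupported transport costs no extra round trips.&lt;/p&gt;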

&lt;p&gt;Transitioning HTTP away from the role of a concrete protocol, to the role of an abstract transport layer, will open up new possibilities for the web, and will play a key role for the deployment and acceptance of new technology that will make web browsing, communication, and distributed computing more efficient, secure, and generally awesome.&lt;/p&gt;

   </content>
 </entry>
 
 <entry>
   <title>My thoughts on academia, two months in</title>
   <link href="http://jonasschneider.com/2012/12/my-thoughts-on-academia-two-months-in.html"/>
   <updated>2012-12-10T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2012/12/my-thoughts-on-academia-two-months-in</id>
   <content type="html">
    
&lt;p&gt;Performing Arts and related disciplines have their celebrities. Their visionaries.
Everybody who dreams of, one day, becoming a famous actor or musician, has their source of inspiration.
We strive to learn from the greatest, to one day be one of them ourselves.
This thought is embodied in the image of celebrities within our society. The famous have chosen their path, and even though their goal may not be the perfect one for everybody, it’s the will and commitment that matters.&lt;/p&gt;

&lt;p&gt;These celebrities are the “stars in the sky” for the young people starting out their studies within the fields of Performing Arts. Without anyone to look up to and admire, the feeling of being overwhelmed by the subject of study would drive even the most dedicated students away from pursuing their dream. Even though they may be from another country or continent and completely unreachable, having an idol justifies the insane time and talent investments.
For the aspiring artists, there is the general notion to “suck it up”, keep at it, and concentrate on smaller, accomplishable steps.&lt;/p&gt;

&lt;p&gt;This perception always struck me as melodramatic. I used to blame it on the subject — after all, why study Drama if you don’t like yourself some of it.&lt;sup id=&quot;fnref:drama&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:drama&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; Why would I need to be the loser for the first part of my life, and then suddenly (or progressively) become more successful, creative, and inspiring? 
I can see the purely psychological and pragmatic reasons for the ‘success’ of this line of thought. Weeding out uncommitted students. Focusing the attention of the freshmen on smaller tasks that eventually lead to grasping a whole field. “Learning to learn.”&lt;sup id=&quot;fnref:algebra&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:algebra&quot; class=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; Clear goals and clear visions are just that, goals and visions. They are not clear instructions, and clear instructions are probably what you need in order to advance.&lt;/p&gt;

&lt;p&gt;And still, I disagree with this concept of education.
I should be able to decide for myself how, when, and on which grounds I want to study what. Of course, higher education is completely optional. Still, this is not a case of “you just don’t yet know that you want it”; it’s a major investment of human time and talent.
When starting out, you certainly get to know that, apparently, you don’t know anything yet. First, you study the basics, then diverge into a vast array of topics, and maybe eventually converge again on a very specific area of research.&lt;/p&gt;

&lt;p&gt;That’s exactly how I imagined studying a scientific discipline to be.&lt;/p&gt;

&lt;p&gt;But now, two months into my first semester of studying Computer Science at the &lt;a href=&quot;http://kit.edu&quot;&gt;Karlsruhe Institute of Technology&lt;/a&gt;, it seems to me that the ‘arts’ and ‘science’ schemes of education are actually not that different from each other. Within the first two months, around a third of the freshmen have already dropped out. We study linear algebra and mathematical analysis. The amount of time spent on more closely CS-related subjects is not even worth a footnote compared to the hours upon hours spent every week solving the math problems. Albeit very interesting, their sole purpose seems to be to sort out the people who aren’t willing to invest that much of their time.&lt;sup id=&quot;fnref:mathdisclaimer&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:mathdisclaimer&quot; class=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;It’s totally common for students of all skill levels to go to the math lectures and not understand a word of what the professor is saying or doing. When asking students from higher semesters whether this is normal, everybody replies, “Yup, just be sure to go to the tutorials.” I don’t think that’s education. It’s presentation. Maybe you get some of it, maybe you don’t.&lt;/p&gt;

&lt;p&gt;Of course, there is also a positive side to this situation. Similarly to the concepts described above, forming a notion of “us students versus the evil content/professors/world” leads to a strengthened bond between the students.
The importance of celebrities here is minuscule compared to that of their equivalents in popular culture.
Still, dedication is usually fueled not by the content itself, but by the tiny possibility of one day having as large a scientific impact as Alan Turing, or satisfying one’s scientific curiosity regarding the inner workings of the universe, or just making a truckload of money at a software shop. It’s anyone’s guess.&lt;/p&gt;

&lt;p&gt;In retrospect, all of this may seem glaringly obvious.
But I think it’s like seeing somebody sitting at the curbside in the rain. You instantly know: Oh wow, that guy is probably sad.&lt;/p&gt;

&lt;p&gt;You just don’t know what it &lt;em&gt;really&lt;/em&gt; feels like until you’ve been there in the rain for yourself.&lt;/p&gt;

&lt;p class=&quot;epilogue&quot;&gt;
 I am very interested to learn whether and how my perception of education or my attitude towards either of the described education schemes will change during my time at university.
 &lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:drama&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I presume everybody has a certain desire for drama. What differs is merely the magnitude. &lt;a href=&quot;#fnref:drama&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:algebra&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://roshfu.com/dont-learn-algebra&quot;&gt;Don’t Learn Algebra&lt;/a&gt; by Roshan Choxi is a nice read on the topic. &lt;a href=&quot;#fnref:algebra&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:mathdisclaimer&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That’s not to say I think what we learn now is going to be irrelevant in the later semesters. &lt;a href=&quot;#fnref:mathdisclaimer&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

   </content>
 </entry>
 
 <entry>
   <title>The love triangle of programming languages</title>
   <link href="http://jonasschneider.com/2012/11/the-love-triangle-of-programming-languages.html"/>
   <updated>2012-11-27T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2012/11/the-love-triangle-of-programming-languages</id>
   <content type="html">
    
    &lt;p&gt;Programmers love to hate on programming languages. It feels great to pitchfork PHP, ruin Ruby or pick on Python. &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; Here’s &lt;a href=&quot;http://wiki.theory.org/YourLanguageSucks&quot;&gt;a great resource to get you started&lt;/a&gt;.
But also, some programmers love to hate programmers who hate on programming languages. Phrases like “It’s not the tools that matter” or “Great craftsmen can make great stuff with any tool” quickly arise and stifle any discussion about the pros and cons of a language.&lt;/p&gt;

&lt;p&gt;Personally, I think discussing the motivations behind programming language design is very important to help us shape the environment for the developers of tomorrow. I have developed a little scheme for myself to categorize the strengths and weaknesses of programming languages, and I find it quite useful when debating (usually with myself, though.)&lt;/p&gt;

&lt;p&gt;In the “Love Triangle of Programming Languages”, there are three parties that hold stakes when it comes to the design of a language. First, there’s the &lt;strong&gt;user&lt;/strong&gt;. They want to build stuff, obviously. Doing anything should be &lt;em&gt;easy&lt;/em&gt; and intuitive. Then there’s the &lt;strong&gt;machine&lt;/strong&gt;. Well, yeah, after all, somebody has to do the dirty work. Instructions should be clear, &lt;em&gt;simple&lt;/em&gt; and unambiguous. Finally, there’s the &lt;strong&gt;problem&lt;/strong&gt;. This is where it gets a bit hairy, because problems don’t usually have needs and opinions. What I mean, however, is the concept that should be implemented, be it an algorithm or something more abstract. Problems, erm, want to be represented in a manner that allows easy decomposition of &lt;em&gt;complex&lt;/em&gt; processes.&lt;/p&gt;

&lt;p&gt;These opposing positions can be represented by a triangle.&lt;/p&gt;

&lt;svg width=&quot;100%&quot; height=&quot;15em&quot; viewBox=&quot;-40 -40 640 440&quot; style=&quot;margin: 2em 0&quot;&gt;
  &lt;polygon points=&quot;300,30 570,370 30,370&quot; fill=&quot;none&quot; style=&quot;stroke:#333;stroke-width:3&quot; /&gt;
  &lt;text x=&quot;300&quot; y=&quot;10&quot; style=&quot;text-anchor:middle;font-size:200%&quot;&gt;User&lt;/text&gt;
  &lt;text x=&quot;590&quot; y=&quot;380&quot; style=&quot;text-anchor:start;font-size:200%&quot;&gt;Problem&lt;/text&gt;
  &lt;text x=&quot;10&quot; y=&quot;380&quot; style=&quot;text-anchor:end;font-size:200%&quot;&gt;Machine&lt;/text&gt;
&lt;/svg&gt;

&lt;p&gt;A (programming) language represents a point on this figure. The distance to each of the corners represents the level of abstraction or the amount of obstacles between the language and the party in the corner.&lt;/p&gt;

&lt;p&gt;There are some observations to be made merely from this geometric shape.
First of all, as you get closer to one corner, you move away from the others. This also means that there is no point that is very close to all the corners.
The center point, which might intuitively be considered the ‘ideal’ point, is actually not that close to &lt;em&gt;any&lt;/em&gt; of the corners.&lt;/p&gt;

&lt;p&gt;Now think of every point as a language. Interpreting the observations leads to interesting results.
First, as a language grows closer to one of machine, user, or problem, it inevitably gets farther away from the others.
As a result, there is no ‘perfect language’. There are tradeoffs and decisions to be made at every step of language design that influence the distance to each party.
Also, a balanced language does not excel in closeness to any of the parties.&lt;/p&gt;

&lt;p&gt;The weight of each of the parties can be adjusted with regard to the task at hand.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Put less weight towards the problem. Many tasks are algorithmically or computationally trivial, i.e. I/O-intensive operations.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Put less weight towards the user. If the solution is deemed to be a one-off quick fix, code that looks obfuscated might be acceptable. (Whether or not this is a good idea is another question.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Put less weight towards the machine. If the solution is algorithmically complex, has many interactions with other software, or is not likely to be final, worse performance may be a good tradeoff.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let’s start actually putting some languages on the map. Disclaimer: this probably is where you start disagreeing with me, if you haven’t already. The criteria are not set in stone, and “closeness to the user” is subjective anyway.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;svg width=&quot;100%&quot; height=&quot;15em&quot; viewBox=&quot;-40 -40 640 440&quot; style=&quot;margin: 2em 0&quot;&gt;
  &lt;g opacity=&quot;0.4&quot;&gt;
    &lt;polygon points=&quot;300,30 570,370 30,370&quot; fill=&quot;none&quot; style=&quot;stroke:#333;stroke-width:3&quot; /&gt;
    &lt;text x=&quot;300&quot; y=&quot;10&quot; style=&quot;text-anchor:middle;font-size:200%&quot;&gt;User&lt;/text&gt;
    &lt;text x=&quot;590&quot; y=&quot;380&quot; style=&quot;text-anchor:start;font-size:200%&quot;&gt;Problem&lt;/text&gt;
    &lt;text x=&quot;10&quot; y=&quot;380&quot; style=&quot;text-anchor:end;font-size:200%&quot;&gt;Machine&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(30, 370)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Assembler&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(85, 330)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;C&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(-50, 310)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;BASIC&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(20, 250)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;RPG&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(170, 290)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;C++, Go&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(300, 30)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;English&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(340, 80)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Ruby&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(310, 130)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Python&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(250, 180)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;JS, Lua&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(280, 245)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Java, PHP, Rust&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(570, 370)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;28&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Math notation&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(390, 370)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Haskell&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(280, 370)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Erlang&lt;/text&gt;
  &lt;/g&gt;
  &lt;g transform=&quot;translate(370, 300)&quot;&gt;
    &lt;circle cx=&quot;0&quot; cy=&quot;0&quot; r=&quot;15&quot; fill=&quot;red&quot; /&gt;
    &lt;text x=&quot;16&quot; y=&quot;23&quot; fill=&quot;red&quot; style=&quot;text-anchor:start;font-size:130%&quot;&gt;Lisp&lt;/text&gt;
  &lt;/g&gt;
&lt;/svg&gt;

&lt;p&gt;With this choice of positioning, many general assumptions made about languages are fulfilled: Assembler is as close to the metal as it gets, but it’s hard to work with, and modeling complex problems becomes a pain. Ruby and Python are easy to learn and can ‘solve problems’ quite well, but are slow because they abstract a lot away from the machine. Even though mathematical notation models problems best, Haskell can model complex algorithmic problems well and do so efficiently, but at the cost of intuitiveness for the author. Finally, liberal, balanced languages like Java have ‘a bit of everything’, but don’t orient strongly in any direction. &lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;In the past, program and programming language design was heavily influenced by the restricted performance of the computers of the time. A program that finishes before the universe collapses is better than one that doesn’t but reads like a poem, right?&lt;/p&gt;

&lt;p&gt;I claim that the machine is going to play less and less of a central role in the design of the programming languages that shape the future.&lt;/p&gt;

&lt;p&gt;Now, with this statement I don’t want to insult everyone coding C, C++ or Java today. &lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;
Of course we can’t do computing without the computer. Every higher-level language needs to be translated (compiled or interpreted) to lower-level code eventually. But also, we can’t do meaningful computing without making it as easy as possible for humans to access, grasp and modify our programs. And in the future, this will become more important. As everything around us starts to become connected, the complexity of the interactions will rise faster than the complexity of the computations within a single unit.&lt;/p&gt;

&lt;p&gt;In conclusion, no, it’s still not the tools that matter the most. It’s the one using them. But that doesn’t mean that carefully choosing the right tool won’t make your day a heck of a lot easier.&lt;/p&gt;

&lt;p&gt;Oh, and trying to put &lt;a href=&quot;https://en.wikipedia.org/wiki/Brainfuck&quot;&gt;Brainfuck&lt;/a&gt; onto that triangle nearly made my head explode.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Or to bash BASIC, or to giggle at Go. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Between every two languages, there are probably hundreds more, unpictured. Also, English is used merely as a representative for every spoken language. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;And the last one, “Languages outside the triangle are probably shitty.” &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Well, actually, maybe the Java guys. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

   </content>
 </entry>
 
 <entry>
   <title>The future of email storage</title>
   <link href="http://jonasschneider.com/2012/08/the-future-of-email-in-the-cloud.html"/>
   <updated>2012-08-07T00:00:00+00:00</updated>
   <id>http://jonasschneider.com/2012/08/the-future-of-email-in-the-cloud</id>
   <content type="html">
    
&lt;p&gt;In the early days of the internet, there were only a few servers. When users wanted to exchange messages, they would log on to their server via a terminal physically connected to it, and then send their mail. If a message was directed to someone on another server, the sender’s server would take care of transferring the contents of the mail to the other server (in plain text, over SMTP), where it would get sorted into the recipient’s mailbox.&lt;/p&gt;

&lt;p&gt;As personal computers started to become more widespread, people wanted to also send mail from their PCs. This split the process into three parts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;get the message from the sender’s PC to the sender’s mail server&lt;/li&gt;
  &lt;li&gt;get the message from the sender’s mail server to the recipient’s mail server&lt;/li&gt;
  &lt;li&gt;get the message from the recipient’s mail server to the recipient’s PC&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps one and three usually happened inside an organization’s or company’s internal network, while the mail servers of the two organizations talked over the internet.&lt;/p&gt;

&lt;p&gt;The internal communication usually consisted of authenticating the user, so you could not send mail as somebody else or read others’ received mail.&lt;/p&gt;

&lt;p&gt;The communication between the servers wasn’t secure at all - let’s hope the server at mail.recipientcompany.com really is who it claims to be.&lt;/p&gt;

&lt;p&gt;Skip a few years.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://www.radicati.com/wp/wp-content/uploads/2012/04/Email-Statistics-Report-2012-2016-Executive-Summary.pdf&quot;&gt;A market research study&lt;/a&gt; reports 3.3 billion active email accounts for 2012. Google, Microsoft and Yahoo alone account for over one billion of them. With the decentralized nature of the email system and the simplicity of the involved network protocols, how did it come to that?&lt;/p&gt;

&lt;p&gt;Since the authenticity of mails couldn’t always be guaranteed, and you really never knew who you were actually talking to, mail servers began to ‘distrust’ each other. A strange and convoluted combination of spam-flagging, transparent or silent mail rejection, grey-listing and reputation scoring ensued. Message-signing systems such as DomainKeys Identified Mail (DKIM) have been developed to give sending mail servers a way to prove the integrity of their mails, but sadly, they have gained no widespread use.&lt;/p&gt;

&lt;p&gt;If you are one of the big sender companies already, you’re in luck: you’re so trusted that every mail you send gets through. But if you are not, you’ll have to beg, tinker, and pray to get your mail into the recipient’s inbox, even though you are just peacefully sending a few messages from your own domain. The whole sending process becomes so painful that application developers often decide to just set up a Google account for their app and let all sent mail go through Google’s servers in order to get the trusty “Don’t be evil” stamp.&lt;/p&gt;

&lt;p&gt;Actually, I recently tried to send email ‘the old-fashioned way’ from my home
cable connection - by using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netcat&lt;/code&gt; to connect to the receiving SMTP
server, acting on behalf of a DynDNS domain I set up.
And it really is like that: nobody will accept your messages. Either with replies pointing you to policies (which then basically say “you’re screwed”) or by just silently dropping your messages after an unsuspicious &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;250 OK&lt;/code&gt; SMTP reply.&lt;/p&gt;
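For reference, the client half of such an ‘old-fashioned’ session can be sketched as follows (the hostname and addresses are made up for illustration); these are roughly the commands you type into netcat once the connection to the recipient’s MX host on port 25 is open, with the server answering a numeric status code after each one:

```python
def smtp_dialogue(helo_host, sender, recipient, body):
    """Return the raw SMTP commands, in order, that a client sends
    to deliver one message directly to the receiving server."""
    return [
        f"HELO {helo_host}",        # introduce ourselves
        f"MAIL FROM:<{sender}>",    # envelope sender
        f"RCPT TO:<{recipient}>",   # envelope recipient
        "DATA",                     # server replies 354: go ahead
        body,                       # headers + blank line + text
        ".",                        # a lone dot ends the message body
        "QUIT",
    ]

for line in smtp_dialogue("example.dyndns.org",
                          "me@example.dyndns.org",
                          "friend@example.com",
                          "Subject: Hello\r\n\r\nSent without any relay."):
    print(line)
```

Even when every step here is answered with a friendly 2xx code, the message may still never reach the inbox, which is exactly the silent-drop behavior described above.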

&lt;p&gt;Okay, great, let’s stick to the trusted mail senders, then. But, you say, what about my privacy?
In 1991, Phil Zimmermann wrote Pretty Good Privacy, known as PGP, providing paranoia-prone cypherpunks (yes, I’m playing devil’s advocate here) with the ability to encrypt their communication.
While mathematically sound, the system lacked another core ingredient: a good user experience. Before widespread client-side adoption, you had to manually copy the Base64-encoded blob of text out of your mail client and decrypt it before being able to see the message, and in any case, you still have to have your ultimately-secret private key on hand every time you want to read your mail.&lt;/p&gt;

&lt;p&gt;Let’s head back to 2012. We are here today with our iPhones and iPads and S3s and laptops and whatever 3G-enabled devices (or glasses, or fridges) are going to come up during the next few years, and we still want to read our mail. It’s so easy, they say, just route every single one of your mails through our system, then you can easily read it on every web-enabled device. It just works! Spotted the catch yet?&lt;/p&gt;

&lt;p&gt;Combining the ease of use offered by cloud-based systems with strong cryptography doesn’t work right now. Reading PGP-encrypted mail on mobile devices, if possible at all, is a painful experience. Yes, security always comes at the price of convenience, but email has gained such utmost importance for almost everyone that it’s ridiculous to argue that the present state is quite good enough in terms of &lt;em&gt;either&lt;/em&gt; security or convenience.&lt;/p&gt;

&lt;h2 id=&quot;so-whats-the-point&quot;&gt;So, what’s the point?&lt;/h2&gt;

&lt;p&gt;From a high-level view, this is what I would expect from a modern email infrastructure:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;worldwide 1-click access to email from authorized mobile devices&lt;/li&gt;
  &lt;li&gt;no storage of private keys within those devices - their loss would be catastrophic&lt;/li&gt;
  &lt;li&gt;perfect encryption from sender’s keyboard to receiver’s eyes&lt;/li&gt;
  &lt;li&gt;which also leads to: no single company should have access to all of my messages&lt;/li&gt;
  &lt;li&gt;“batteries included” set-up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Technical aspects would include full backwards compatibility with existing email services, registration of authorized devices (and revocation in case of loss), automatic encrypted backups, and near-real-time performance. By the way, the only exception to the no-dynamic-IPs policy is Google. They have really gotten it right. If you try to send your mail 70s-style like I did above, without any authentication or signing, it will arrive but be flagged as spam. If you set up SPF and sign your message with DKIM (which is really doable - it took me about four seconds to manually sign a message using the Ruby DKIM gem), it will actually arrive as a legit message in the recipient’s Gmail account.&lt;/p&gt;

&lt;p&gt;Current email providers all fail at least some of these requirements, as do old-fashioned PGP-based write-your-key-fingerprint-on-a-napkin encryption models. There needs to be a secure location to store the email data, serving it to authorized clients over the net, all using strong cryptography on trusted hardware in a trusted environment.&lt;/p&gt;

&lt;p&gt;In my vision of the email future, the cloud’s coming home. Literally.&lt;/p&gt;

   </content>
 </entry>
 

</feed>
