Planet Crustaceans

This is a Planet instance for lobste.rs community feeds. To add/update an entry or otherwise improve things, fork this repo.

August 19, 2019

Pete Corey (petecorey)

Animating a Canvas with React Hooks August 19, 2019 12:00 AM

A recent React-based client project of mine required an HTML5 canvas animation. Already knee-deep in the new React hooks API, I decided to forgo the “traditional” (can something be “traditional” after just five years?) technique of using componentDidMount and componentWillUnmount in a class-based component, and try my hand at rendering and animating a canvas using React’s new useEffect hook.

Let’s dig into it!


Let’s set the scene by creating a new React component that we want to add our canvas to. We’ll assume that we’re trying to render a circle, so we’ll call our new component, Circle:


import React from 'react';

const Circle = () => {
    return (
        <canvas
            style={{ width: '100px', height: '100px' }}
        />
    );
};

export default Circle;

So far so good.


Our Circle component renders a canvas onto the page, but we have no way of interacting with it. Typically, to interact with an element of the DOM from within a React component, you need a “ref”. The new React hooks API gives us a convenient way to create and use refs:


 import React from 'react';
+import { useRef } from 'react';

 const Circle = () => {
+    let ref = useRef();
     return (
         <canvas
+            ref={ref} 
             style={{ width: '100px', height: '100px' }}
         />
     );
 };

 export default Circle;

Now ref.current holds a reference to our canvas DOM node.


Interacting with our canvas produces “side effects”, which isn’t allowed from within the render phase of a React component. Thankfully, the new hooks API gives us a simple way to introduce side effects into our components via the useEffect hook.


 import React from 'react';
+import { useEffect } from 'react';
 import { useRef } from 'react';
 
 const Circle = () => {
     let ref = useRef();
     
+    useEffect(() => {
+        let canvas = ref.current;
+        let context = canvas.getContext('2d');
+        context.beginPath();
+        context.arc(50, 50, 50, 0, 2 * Math.PI);
+        context.fill();
+    });
     
     return (
         <canvas
             ref={ref} 
             style={{ width: '100px', height: '100px' }}
         />
     );
 };
 
 export default Circle;

Our useEffect callback is free to interact directly with our canvas living in the DOM. In this case, we’re drawing a circle to our canvas.


Unfortunately, our circle may look a little fuzzy or distorted, depending on the pixel density of the screen it’s being viewed on. Let’s fix that by adjusting the scaling of our canvas:


 import React from 'react';
 import { useEffect } from 'react';
 import { useRef } from 'react';
 
+const getPixelRatio = context => {
+    let backingStore =
+        context.backingStorePixelRatio ||
+        context.webkitBackingStorePixelRatio ||
+        context.mozBackingStorePixelRatio ||
+        context.msBackingStorePixelRatio ||
+        context.oBackingStorePixelRatio ||
+        1;
+    
+    return (window.devicePixelRatio || 1) / backingStore;
+};
 
 const Circle = () => {
     let ref = useRef();
     
     useEffect(() => {
         let canvas = ref.current;
         let context = canvas.getContext('2d');
         
+        let ratio = getPixelRatio(context);
+        let width = getComputedStyle(canvas)
+            .getPropertyValue('width')
+            .slice(0, -2);
+        let height = getComputedStyle(canvas)
+            .getPropertyValue('height')
+            .slice(0, -2);
         
+        canvas.width = width * ratio;
+        canvas.height = height * ratio;
+        canvas.style.width = `${width}px`;
+        canvas.style.height = `${height}px`;
         
         context.beginPath();
         context.arc(
+            canvas.width / 2,
+            canvas.height / 2,
+            canvas.width / 2,
             0,
             2 * Math.PI
         );
         context.fill();
     });
     
     return (
         <canvas
             ref={ref} 
             style={{ width: '100px', height: '100px' }}
         />
     );
 };
 
 export default Circle;

And with that, our circle should be crystal clear.


Now let’s introduce some animation. The standard way of animating an HTML5 canvas is using the requestAnimationFrame function to repeatedly call a function that renders our scene. Before we do that, we need to refactor our circle drawing code into a render function:


 useEffect(() => {
     ...
+    const render = () => {
         context.beginPath();
         context.arc(
             canvas.width / 2,
             canvas.height / 2,
             canvas.width / 2,
             0,
             2 * Math.PI
         );
         context.fill();
+    };
     
+    render();
 });

Now that we have a render function, we can instruct the browser to recursively call it whenever it's appropriate to render another frame:


 const render = () => {
     context.beginPath();
     context.arc(
         canvas.width / 2,
         canvas.height / 2,
         canvas.width / 2,
         0,
         2 * Math.PI
     );
     context.fill();
+    requestAnimationFrame(render);
 };

This works, but there are two problems. First, if our component unmounts after our call to requestAnimationFrame, but before our render function is called, we're left with a callback scheduled against a component that no longer exists. We should really cancel any pending animation frame requests any time our component unmounts. Thankfully, requestAnimationFrame returns a request identifier that can be passed to cancelAnimationFrame to cancel our request.

The useEffect hook optionally expects a function to be returned by our callback. This function will be called to handle any cleanup required by our effect. Let’s refactor our animation loop to properly clean up after itself:


 useEffect(() => {
     ...
+    let requestId;
     const render = () => {
         context.beginPath();
         context.arc(
             canvas.width / 2,
             canvas.height / 2,
             canvas.width / 2,
             0,
             2 * Math.PI
         );
         context.fill();
+        requestId = requestAnimationFrame(render);
     };
     
     render();
     
+    return () => {
+        cancelAnimationFrame(requestId);
+    };
 });

Perfect.


Our second problem is that our render function isn’t doing anything. We have no visual indicator that our animation is actually happening!

Let’s change that and have some fun with our circle:


 let requestId,
+    i = 0;
 const render = () => {
     context.clearRect(0, 0, canvas.width, canvas.height);
     context.beginPath();
     context.arc(
         canvas.width / 2,
         canvas.height / 2,
+        (canvas.width / 2) * Math.abs(Math.cos(i)),
         0,
         2 * Math.PI
     );
     context.fill();
+    i += 0.05;
     requestId = requestAnimationFrame(render);
 };

Beautiful. Now we can clearly see that our animation is running in full swing. This was obviously a fairly contrived example, but hopefully it serves as a helpful recipe for you in the future.


August 18, 2019

David Wilson (dw)

Mitogen v0.2.8 released August 18, 2019 08:45 PM

Mitogen for Ansible v0.2.8 has been released. This version (finally) supports Ansible 2.8, comes with a supercharged replacement fetch module, and includes roughly 85% of what is needed to implement fully asynchronous connect.

As usual a huge slew of fixes are included. This is a bumper release, running to over 20k lines of diff. Get it while it's hot, and as always, bug reports are welcome!

Zac Brown (zacbrown)

Using Opusmodus with QuickLisp August 18, 2019 07:00 AM

In my scant free time, I have begun playing with Common Lisp again. It’s been 10-15 years since I last spent any time with it. I recently discovered a program called Opusmodus that uses Common Lisp at its core. It provides a music composition interface using Common Lisp to enable composers to programmatically score their compositions.

Opusmodus wasn’t explicitly designed for programmers, though there are a number of programmers that use it, from what I can tell. If you have written any Common Lisp in recent history, QuickLisp is a natural tool to use as part of the development cycle. Unfortunately, Opusmodus’s bundled version of Clozure CL (called ccl later) is stripped of the development interfaces, so some packages will not compile correctly with (ql:quickload "...").

If you go hunting around for instructions on how to install QuickLisp properly with Opusmodus, there are a few references on the forums, but they're at least a couple of years old. Here's a quick set of steps to get QuickLisp installed and loaded automatically. It's broken into updating ccl, and then the changes that are necessary in Opusmodus.

Updating ccl

Note: Update any instances of <user> with your own user’s name.

  1. Pick a directory where you’d like to have the full ccl installed. A good place would be /Users/<user>/ccl/.
  2. Fetch the sources from git: git clone https://github.com/Clozure/ccl.git /Users/<user>/ccl
  3. Fetch the bootstrapping binaries: curl -L -O https://github.com/Clozure/ccl/releases/download/v1.12-dev.5/darwinx86.tar.gz
    • Note: At the time that I wrote this, Opusmodus was using ccl v1.12-dev.5 which is why the bootstrapping binaries above reference that specific release.
  4. Change directory into the ccl sources: cd /Users/<user>/ccl
  5. Unpack the bootstrapping binaries into the sources directory: tar xf ../darwinx86.tar.gz
  6. Launch ccl: /Users/<user>/ccl/dx86cl64
  7. Now rebuild ccl: (rebuild-ccl :full t)

If all went well, you should see a lot of output about “Loading …” and then at the end:

;Wrote bootstrapping image: #P"/Users/zbrown/Code/lisp/ccl-dev/x86-boot64.image"
;Building lisp-kernel ...
;Kernel built successfully.
;Wrote heap image: #P"/Users/zbrown/Code/lisp/ccl-dev/dx86cl64.image"

Configuring Opusmodus to Load QuickLisp

The default location for Opusmodus to install extensions is in /Users/<user>/Opusmodus/Extensions. Opusmodus v1.3 includes a QuickLisp Start.lisp in the extension directory with a lot of what you need to get started.

There are a couple of specific things you need to update, though, to get it working:

  1. First, the entirety of the QuickLisp Start.lisp is block/multiline commented out. If you’re not familiar with the Common Lisp multiline comments, they open with #| and close with |#. In my installation of Opusmodus, the entire file was commented out.
  2. Second, you need to tell Opusmodus to use your own version of ccl instead of the bundled version of ccl. Beneath the line (in-package :Opusmodus), add this line (update the <user>): (setf (logical-pathname-translations "ccl") '((#P"ccl:**;*.*" #P"/Users/<user>/ccl/**/*.*")))
    • This updates the path that Opusmodus looks for ccl.
  3. Next we need to uncomment the lines that load QuickLisp and install it. They look like the following:
    • (load "http://beta.quicklisp.org/quicklisp.lisp")
    • (quicklisp-quickstart:install)

Once you’ve finished up the two sections above, you can launch Opusmodus and it should load QuickLisp. To test this, select the Listener window and install a package: (ql:quickload "cl-json"). If all goes well, you should see something like:

Welcome to Opusmodus Version 1.3 (r24952)
Copyright © MMXIX Opusmodus™ Ltd. All rights reserved.
? (ql:quickload "cl-json")
To load "cl-json":
  Load 1 ASDF system:
    cl-json
; Loading "cl-json"

("cl-json")
? 

In my case, I already had cl-json installed so nothing was downloaded.

The end

That’s it. Nothing else to describe.

August 17, 2019

Jan van den Berg (j11g)

The Death of Murat Idrissi – Tommy Wieringa August 17, 2019 03:12 PM

Tommy Wieringa is of course famous for his novel Joe Speedboot. A tremendous novel, where Wieringa demonstrates heaps of writers’ finesse. This book — The Death of Murat Idrissi — is no different. Even though this is a short and easy read, it touches on a lot of subjects and themes and has the Tommy Wieringa flair all over it. He is a master at describing brooding situations, internal struggles and setting a scene.

The death of Murat Idrissi – Tommy Wieringa (2017) – 123 pages

The post The Death of Murat Idrissi – Tommy Wieringa appeared first on Jan van den Berg.

August 16, 2019

Jan van den Berg (j11g)

Gid – Get it done! August 16, 2019 08:34 PM

Last weekend I built a personal ToDo app. Partly as an excuse to mess around a bit with all this ‘new and hip’ Web 2.0 technology (jQuery and Bootstrap) 🙈

But mostly because I needed one, and I couldn’t find a decent one.

Yes, I designed the icon myself. Can you tell?

Decent?

Decent in my opinion would be:

  1. Self hosted
  2. Self contained
  3. Use a plain text file
  4. Mobile friendly
  5. Able to track / see DONE items

And Gid does just that (and nothing more).

  1. Any PHP enabled webserver will do.
  2. No need for third party tools, everything you need is right here (Bootstrap and jQuery are included).
  3. No database setup or connection is necessary. Gid writes to plain text files that can be moved and edited by hand if needed (like todotxt.org).
  4. Works and looks decent on a smartphone.
  5. The DONE items are still visible with a strike through.

I had fun, learned quite a few new things (web development is not my day job), and my wife and I now share our grocery list with this app!

The biggest headache was getting iOS to consistently and correctly handle input form submit events. Being a touch device, this is somehow still a thing in 2019. Thanks Stack Overflow! Anyway, this is what it looks like on my iPhone.

Web development

This was mainly an interesting exercise to try to understand how PHP, Javascript/jQuery and Bootstrap work together on a rather basic level, and how with Ajax you are able to manipulate the DOM. I deliberately used an older tech stack, thinking a lot of problems would already be solved; however (as explained), some things still seem to be a thing. Also, what I was trying to do is just very, very basic, and still I feel somehow this should be way easier! There are a lot of different technologies involved that each have their own specifics and that all have to work together.

Code is on Github: do as you please.

(P.S. Yes, the name. I know.)

The post Gid – Get it done! appeared first on Jan van den Berg.

Derek Jones (derek-jones)

Converting lines in an svg image to csv August 16, 2019 01:40 AM

During a search for data on programming language usage I discovered Stack Overflow Trends, showing an interesting plot of language tags appearing on Stack Overflow questions (see below). Where was the csv file for these numbers? Somebody had asked this question last year, but there were no answers.

Stack Overflow language tag trends over time.

The graphic is in svg format; has anybody written an svg to csv conversion tool? I could only find conversion tools for specialist uses, e.g., geographical data processing. The svg file format is all xml, and using a text editor I could see the numbers I was after. How hard could it be (it had to be easier than a png heatmap)?

Extracting the x/y coordinates of the line segments for each language turned out to be straightforward (after some trial and error). The svg generation process made matching language to line trivial; the language name was included as an xml attribute.

Programmatically extracting the x/y axis information exhausted my patience, and I hard coded the numbers (code+data). The process involves walking an xml structure and R’s list processing, two pet hates of mine (the data is for a book that uses R, so I try to do everything data related in R).

I used R’s xml2 package to read the svg files. Perhaps if my mind had a better fit to xml and R lists, I would have been able to do everything using just the functions in this package. My aim was always to get far enough down to convert the subtree to a data frame.
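For illustration only (the book's actual code is in R, using xml2), a rough Python sketch of the same idea might look like the following. The element and attribute names are assumptions about how the Stack Overflow svg is structured, not something I have verified.

# Sketch: walk an svg, pull numeric x/y pairs out of each <path>'s "d"
# attribute, and write one csv row per point, tagged with the language name
# that (per the post) is stored as an xml attribute on the path.
import csv
import re
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_lines_to_csv(svg_file, csv_file):
    tree = ET.parse(svg_file)
    with open(csv_file, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["language", "x", "y"])
        for path in tree.iter(SVG_NS + "path"):
            language = path.get("class")            # assumed attribute name
            numbers = re.findall(r"-?\d+(?:\.\d+)?", path.get("d", ""))
            for x, y in zip(numbers[0::2], numbers[1::2]):
                writer.writerow([language, x, y])

# svg_lines_to_csv("so-trends.svg", "so-trends.csv")

These would still be raw svg coordinates; mapping them to axis values is the part that ends up hard coded.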

Extracting data from graphs represented in svg files is so easy (says he). Where is the wonderful conversion tool that my search failed to locate? Pointers welcome.

August 15, 2019

Jan van den Berg (j11g)

Create a Chrome bookmark html file to import list of URLs August 15, 2019 02:20 PM

I recently switched RSS providers and I could only extract my saved posts as a list of URLs. So I thought I’d add these to a bookmark folder in Chrome. However, Chrome bookmark import only accepts a specifically formatted .html file.

So if you have a file with all your URLs, name this file ‘urls.txt’ and run this script to create a .html file that you can import in Chrome (hat-tip to GeoffreyPlitt).

#!/bin/bash
#
# Run this script on a file named urls.txt with all your URLs and pipe the output to an HTML file.
# Example: ./convert_url_file.sh > bookmarks.html

echo "<!DOCTYPE NETSCAPE-Bookmark-file-1>"
echo '<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">'
echo '<TITLE>Bookmarks</TITLE>'
echo '<H1>Bookmarks</H1>'
echo '<DL><p>'
while read -r L; do
  echo '    <DT><A HREF="'"$L"'">'"$L"'</A>'
done < urls.txt
echo "</DL><p>"

The post Create a Chrome bookmark html file to import list of URLs appeared first on Jan van den Berg.

Pepijn de Vos (pepijndevos)

Open Source Formal Verification in VHDL August 15, 2019 12:00 AM

I believe in the importance of open source synthesis, and think it’s important that open source tools support both Verilog and VHDL. Even though my GSoC proposal to add VHDL support to Yosys was rejected, I’ve still been contributing small bits and pieces to GHDL and its Yosys plugin.

This week we reached what I think is an important milestone: I was able to synthesize my VHDL CPU and then formally verify the ALU of it using completely open source tools. (and then synthesize it to an FPGA which is not supported by open source tools yet) There is a lot to unpack here, so let’s jump in.

Yosys, Nextpnr, SymbiYosys, GHDL, ghdlsynth-beta

Yosys is an open source synthesis tool that is quickly gaining momentum and supporting more and more FPGAs. Yosys currently supports Verilog, and turns that into various low-level netlist representations.

Nextpnr is a place-and-route tool, which takes a netlist and turns it into a bitstream for any of the supported FPGA types. These bitstream formats are not publicly documented, so this is a huge reverse-engineering effort.

SymbiYosys is a tool based around Yosys and various SAT solvers to let you do formal verification on your code. More on formal verification later. But important to know is that it works on the netlists generated by Yosys.

GHDL is an open source VHDL simulator, and as far as I know, one of a kind. VHDL is notoriously hard to parse, so many other open source attempts at VHDL simulation and synthesis have faltered. Work is underway to add synthesis to GHDL.

And last but not least, ghdlsynth-beta is a plugin for Yosys that converts the synthesis format of GHDL to the intermediate representation of Yosys, allowing it to be synthesized to various netlist formats and used for FPGA, ASIC, formal verification, and many other uses. It is currently a separate repository, but the goal is to eventually upstream it into Yosys.

Formal Verification

I think formal verification sounds harder and more scary than it is. An alternative description is property testing with a SAT solver. Think Quickcheck, not Coq. This is much simpler and less formal than using a proof assistant.

Basically you describe properties about your code, and SymbiYosys compiles your code and properties to a netlist, and from a netlist to Boolean logic. A SAT solver is then used to find inputs to your code that (dis)satisfy the properties you described. This does not “prove” that your code is correct, but it proves that it satisfies the properties you defined.

In hardware description languages you describe properties by assertions and assumptions. An assumption constrains what the SAT solver can consider as valid inputs to your program, and assertions are things you believe to be true about your code.

The powerful thing about formal verification is that it considers all valid inputs at every step, and not just the happy case you might test in simulation. It will find so many edge cases it’s not even funny. Once you get the hang of it, it’s actually less work than writing a testbench. Just a few assertions in your code and the bugs come flying at you.

If you want to learn more about formal verification, Dan Gisselquist has a large number of articles and tutorials about it, mainly using Verilog.

Installation

To play along at home, you need to install a fair number of programs, so better get some of your favourite hot beverage.

At this point you should be able to run ghdl --synth foo.vhd -e foo which will output a VHDL representation of the synthesized netlist. You should be able to run yosys -m ghdl and use the Yosys command ghdl foo.vhd -e foo to obtain a Yosys netlist which you can then show, dump, synth, or even write_verilog.

Verifying a bit-serial ALU

To demonstrate how formal verification works and why it is so powerful, I want to walk you through the verification of the ALU of my CPU.

I’m implementing a bit-serial architecture, which means that my ALU operates on one bit at a time, producing one output bit and a carry. The carry out is then the carry in to the next bit. The logic that produces the output and the carry depends on a 3-bit opcode.

  process(opcode, a, b, ci)
  begin
    case opcode is
      when "000" => -- add
        y <= a xor b xor ci; -- output
        co <= (a and b) or (a and ci) or (b and ci); -- carry
        cr <= '0'; -- carry reset value
      -- [...]
    end case;
  end process;

  process(clk, rst_n)
  begin
    if(rising_edge(clk)) then
      if(rst_n = '0') then
        ci <= cr; -- reset the carry
      else
        ci <= co; -- copy carry out to carry in
      end if;
    end if;
  end process;

Important to note is the carry reset value. For addition, the first bit is added without carry, but for subtraction the carry is 1 because -a = (not a) + 1, and similarly for other different opcodes. So when in reset, the ALU sets the carry in to the reset value corresponding to the current opcode.

So now onward to the verification part. Since VHDL only has assert and none of the SystemVerilog goodies, Property Specification Language is used. (that link contains a good tutorial) PSL not only provides restrict, assume, and cover, but also allows you to express preconditions and sequences.

To make my life easier, I want to restrict valid sequences to those where the design starts in reset, processes 8 bits, goes back into reset, and repeats, so that the reset signal will look like 011111111011111111...

restrict { {rst_n = '0'; (rst_n = '1')[*8]}[+]};

Then, I want to specify that when the ALU is active, the opcode will stay constant. Else you’ll just get nonsense.

assume always {rst_n = '0'; rst_n = '1'} |=>
  opcode = last_op until rst_n = '0';

Note that I did not define any clock or inputs. Just limiting the reset and opcode is sufficient. With those assumptions in place, we can assert what the output should look like. I shift the inputs and outputs into 8-bit registers, and then when the ALU goes into reset, we can verify the output. For example, if the opcode is “000”, the output should be the sum of the two inputs.

assert always {opcode = "000" and rst_n = '1'; rst_n = '0'} |->
  y_sr = a_sr+b_sr;

After adding the other opcodes, I wrapped the whole thing in a generate block so I can turn it off with a generic parameter for synthesis.

formal_gen : if formal generate
  signal last_op : std_logic_vector(2 downto 0);
  signal a_sr : unsigned(7 downto 0);
  signal b_sr : unsigned(7 downto 0);
  signal y_sr : unsigned(7 downto 0);
begin
-- [...]
end generate;

And now all that’s left to do is write the SymbiYosys script and run it. The script just specifies how to compile the files and the settings for the SAT solver. Note that -fpsl is required for reading PSL code in comments, or --std=08 to use VHDL-2008, which supports PSL as part of the core language.

[options]
mode bmc
depth 20

[engines]
smtbmc z3

[script]
ghdl --std=08 alu.vhd -e alu
prep -top alu

[files]
alu.vhd

To load the GHDL plugin, SymbiYosys has to be run as follows:

$ sby --yosys "yosys -m ghdl" -f alu.sby 
SBY 15:02:25 [alu] Removing direcory 'alu'.
SBY 15:02:25 [alu] Copy 'alu.vhd' to 'alu/src/alu.vhd'.
SBY 15:02:25 [alu] engine_0: smtbmc z3
SBY 15:02:25 [alu] base: starting process "cd alu/src; yosys -m ghdl -ql ../model/design.log ../model/design.ys"
SBY 15:02:25 [alu] base: finished (returncode=0)
SBY 15:02:25 [alu] smt2: starting process "cd alu/model; yosys -m ghdl -ql design_smt2.log design_smt2.ys"
SBY 15:02:25 [alu] smt2: finished (returncode=0)
SBY 15:02:25 [alu] engine_0: starting process "cd alu; yosys-smtbmc -s z3 --presat --noprogress -t 20 --append 0 --dump-vcd engine_0/trace.vcd --dump-vlogtb engine_0/trace_tb.v --dump-smtc engine_0/trace.smtc model/design_smt2.smt2"
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Solver: z3
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Checking assumptions in step 0..
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Checking assertions in step 0..
[...]
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Checking assumptions in step 9..
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Checking assertions in step 9..
SBY 15:02:25 [alu] engine_0: ##   0:00:00  BMC failed!
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Assert failed in alu: /179
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Writing trace to VCD file: engine_0/trace.vcd
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Writing trace to Verilog testbench: engine_0/trace_tb.v
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Writing trace to constraints file: engine_0/trace.smtc
SBY 15:02:25 [alu] engine_0: ##   0:00:00  Status: FAILED (!)
SBY 15:02:25 [alu] engine_0: finished (returncode=1)
SBY 15:02:25 [alu] engine_0: Status returned by engine: FAIL
SBY 15:02:25 [alu] summary: Elapsed clock time [H:MM:SS (secs)]: 0:00:00 (0)
SBY 15:02:25 [alu] summary: Elapsed process time [H:MM:SS (secs)]: 0:00:00 (0)
SBY 15:02:25 [alu] summary: engine_0 (smtbmc z3) returned FAIL
SBY 15:02:25 [alu] summary: counterexample trace: alu/engine_0/trace.vcd
SBY 15:02:25 [alu] DONE (FAIL, rc=2)

Oh no! We have a bug! Let’s open the trace to see what went wrong.

gtkwave alu/engine_0/trace.vcd

gtkwave trace

So we’re doing a subtraction, and according to my math 29-150=-121 but the ALU output is -122, so we’re off by one. A little head-scratching later, we can see the problem: On the first cycle of the subtraction the carry in is zero rather than one! Why? Because on the previous clock cycle the instruction was exclusive or, which reset the carry in to zero.

Note that this bug would never show up if you did a test bench that executes a fixed instruction from reset. But the SAT solver managed to find a specific sequence of opcodes that cause the carry to be wrong. Awesome.

So how do we fix it? There are two ways. The first is to change the code to asynchronously determine the carry in. The second is to write your code so the opcode is stable before the ALU comes out of reset, which ended up using less logic. In this case we can change the opcode assumption to

assume always {rst_n = '0'; rst_n = '1'} |->
  opcode = last_op until rst_n = '0';

Note that we used the thin arrow |-> rather than the fat arrow |=> now. The fat arrow triggers after the precondition has been met, while the thin arrow overlaps with the end of the precondition. So now we’re saying that when reset became inactive, the opcode is the same as it was while the device was in reset. Let’s try again.

$ sby --yosys "yosys -m ghdl" -f alu.sby 
SBY 15:31:36 [alu] Removing direcory 'alu'.
SBY 15:31:36 [alu] Copy 'alu.vhd' to 'alu/src/alu.vhd'.
SBY 15:31:36 [alu] engine_0: smtbmc z3
SBY 15:31:36 [alu] base: starting process "cd alu/src; yosys -m ghdl -ql ../model/design.log ../model/design.ys"
SBY 15:31:36 [alu] base: finished (returncode=0)
SBY 15:31:36 [alu] smt2: starting process "cd alu/model; yosys -m ghdl -ql design_smt2.log design_smt2.ys"
SBY 15:31:36 [alu] smt2: finished (returncode=0)
SBY 15:31:36 [alu] engine_0: starting process "cd alu; yosys-smtbmc -s z3 --presat --noprogress -t 20 --append 0 --dump-vcd engine_0/trace.vcd --dump-vlogtb engine_0/trace_tb.v --dump-smtc engine_0/trace.smtc model/design_smt2.smt2"
SBY 15:31:36 [alu] engine_0: ##   0:00:00  Solver: z3
SBY 15:31:36 [alu] engine_0: ##   0:00:00  Checking assumptions in step 0..
SBY 15:31:36 [alu] engine_0: ##   0:00:00  Checking assertions in step 0..
[...]
SBY 15:31:37 [alu] engine_0: ##   0:00:01  Checking assumptions in step 19..
SBY 15:31:37 [alu] engine_0: ##   0:00:01  Checking assertions in step 19..
SBY 15:31:37 [alu] engine_0: ##   0:00:01  Status: PASSED
SBY 15:31:37 [alu] engine_0: finished (returncode=0)
SBY 15:31:37 [alu] engine_0: Status returned by engine: PASS
SBY 15:31:37 [alu] summary: Elapsed clock time [H:MM:SS (secs)]: 0:00:01 (1)
SBY 15:31:37 [alu] summary: Elapsed process time [H:MM:SS (secs)]: 0:00:01 (1)
SBY 15:31:37 [alu] summary: engine_0 (smtbmc z3) returned PASS
SBY 15:31:37 [alu] DONE (PASS, rc=0)

Yay!

Debugging tips

It should be said that all of this is very experimental and you are therefore likely to run into bugs and missing features. I would say that at this point it is feasible to write new code and work around GHDL’s current limitations (or fix them!), but running large existing codebases is unlikely to be successful. (but very much the goal!)

When you run into errors, the first step is to find out if it is a bug in the plugin or GHDL itself.

If you see Unsupported(1): instance X of Y. this means the plugin does not know how to translate a GHDL netlist item to Yosys. These are usually pretty easy to fix. See this pull request for an example. Good to know: Id_Sub is defined in ghdlsynth_gates.h which is generated from netlists-gates.ads. module->addSub is defined in rtlil.h.

If you just see ERROR: vhdl import failed. this likely means GHDL crashed. Run GHDL outside Yosys (ghdl --synth) to see the actual error. Usually it’ll show something like some_package: cannot handle IIR_KIND_SOMETHING (mycode.vhd:26:8) which means that some_package in the src/synth part of GHDL can’t handle some language construct yet. This can be anything from a missing operator to whole language constructs, and the fix can be anything from copy-pasting another operator to a serious project. See this pull request for an example on how to add a missing operator.

If it’s not obvious what is going on, it’s time to break out gdb. It’s important to know that in the GHDL repo there is a .gdbinit that you can source inside gdb. This enables catching exceptions and includes utilities for printing IIR values. If you want to debug inside Yosys, it is helpful to first run the program without breakpoints so all the libraries are loaded and gdb understands there is Ada code involved. Then source .gdbinit, set breakpoints, and run again. (note: GHDL/Yosys command line arguments are passed to run and not gdb)

Happy debugging!

August 12, 2019

Pete Corey (petecorey)

Fly Straight, Dammit! August 12, 2019 12:00 AM

Numberphile recently posted a video about an interesting recursive function called “Fly Straight, Dammit!” which, when plotted, initially seems chaotic, but after six hundred thirty eight iterations, instantly stabilizes.

This sounds like a perfect opportunity to flex our J muscles and plot this function ourselves!

An Imperative Solution

The simplest approach to plotting our “Fly Straight, Dammit!” graph using the J programming language is to approach things imperatively:

a =: monad define
  if. y < 2 do.
    1
  else.
    py =. a y - 1
    gcd =. y +. py
    if. 1 = gcd do.
      1 + y + py
    else.
      py % gcd
    end.
  end.
)

We’ve defined our monadic verb a to return 1 if we pass in a “base case” value of 0 or 1. Otherwise, we recursively execute a on y - 1 to get our py, or “previous y”. Next, we check if the gcd of y and py equals 1. If it does, we return 1 + y + py. Otherwise, we return py divided by gcd.

This kind of solution shouldn’t look too foreign to anyone.

Let’s plot values of a to verify our solution:

require 'plot'
'type dot' plot a"0 i. 1000

This works, but it’s very slow. We know that our recursive calls are doing a lot of duplicated work. If we could memoize the results of our calls to a, we could save quite a bit of time. Thankfully, memoizing a verb in J is as simple as adding M. to the verb’s declaration:

a =: monad define M.
  ...
)

Now our imperative solution is much faster.
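For readers who don't speak J, here is a rough Python rendering of the same memoized recurrence; it is purely illustrative and not code from the post.

# The same recurrence, memoized (functools.lru_cache plays the role of J's
# M. adverb): a(y) = 1 for y < 2; otherwise 1 + y + a(y-1) when y and a(y-1)
# are coprime, else a(y-1) divided by their gcd.
from functools import lru_cache
from math import gcd

@lru_cache(maxsize=None)
def a(y):
    if y < 2:
        return 1
    py = a(y - 1)            # "previous y"
    g = gcd(y, py)
    return 1 + y + py if g == 1 else py // g

print([a(n) for n in range(16)])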

Using Forks and Hooks

While our initial solution works and is fast, it’s not taking advantage of what makes J a unique and interesting language. Let’s try to change that.

The meat of our solution is computing values in two cases. In the case when y and py have a greatest common divisor equal to 1, we’re computing 1 plus y plus py. Our imperative, right to left implementation of this computation looks like this:

1 + y + py

We could also write this as a “monadic noun fork” that basically reads as “1 plus the result of x plus y”:

a_a =: 1 + +

Similarly, when we encounter the case where the greatest common divisor between y and py is greater than 1, we want to compute py divided by that gcd. This can be written as a “dyadic fork”:

a_b =: [ % +.

We can read this fork as “x divided by the greatest common divisor of x and y.”

Now that we’ve written our two computations as tacit verbs, we can use the “agenda” verb (@.) to decide which one to use based on the current situation:

a_a =: 1 + +
a_b =: [ % +.

a =: monad define M.
  if. y < 2 do.
    1
  else.
    py =. a y - 1
    has_gcd =. 1 = y +. py
    py (a_b ` a_a @. has_gcd) y
  end.
)

If has_gcd is 0, or “false”, we’ll return the result of py a_b y. Otherwise, if has_gcd is 1, we’ll return the result of py a_a y.

More Agenda

We can elaborate on the idea of using agenda to conditionally pick the verb we want to apply to help simplify our base case check.

First, let’s define our base case and recursive case as verbs that we can combine into a gerund. Our base case is simple. We just want to return 1:

base_case =: 1:

Our recursive case is just the (memoized) else block from our previous example:

recursive_case =: monad define M.
  py =. a y - 1
  has_gcd =. 1 = y +. py
  py (a_b ` a_a @. has_gcd) y
)

Our function, a, wants to conditionally apply either base_case or recursive_case, depending on whether or not y is greater than one. We can write that using agenda like so:

a =: base_case ` recursive_case @. (1&<)

And because our base_case verb is so simple, we can just inline it to clean things up:

a_a =: 1 + +
a_b =: [ % +.

recursive_case =: monad define M.
  py =. a y - 1
  has_gcd =. 1 = y +. py
  py (a_b ` a_a @. has_gcd) y
)

a =: 1: ` recursive_case @. (1&<)

Using agenda to build conditionals and pseudo-“case statements” can be a powerful tool for incorporating conditionals into J programs.

Going Further

It’s conceivable that you might want to implement a tacit version of our recursive_case. Unfortunately, my J-fu isn’t strong enough to tackle that and come up with a sane solution.

That said, Raul Miller came up with a one-line solution (on his phone) and posted it on Twitter. Raul’s J-fu is strong.

Marc Brooker (mjb)

Kindness, Wickedness and Safety August 12, 2019 12:00 AM

Kindness, Wickedness and Safety

We must build kind systems.

David Epstein's book Range: Why Generalists Triumph in a Specialized World turned me on to the idea of Kind and Wicked learning environments, and I've found the idea to be very useful in framing all kinds of problems.1 The idea comes from The Two Settings of Kind and Wicked Learning Environments. The abstract gets right to the point:

Inference involves two settings: In the first, information is acquired (learning); in the second, it is applied (predictions or choices). Kind learning environments involve close matches between the informational elements in the two settings and are a necessary condition for accurate inferences. Wicked learning environments involve mismatches.

The authors go on to describe the two environments in terms of the information that we can learn from L (for learning), and information that we use when we actually have to make predictions T (for target). They break environments down into kind or wicked depending on how L relates to T. In kind environments, L and T are closely related: if you learn a rule from L it applies at least approximately to T. In wicked environments, L is a subset or superset of T, or the sets intersect only partially, or are completely unrelated.

Simplifying this a bit more: in kind environments we can learn the right lessons from experience; in wicked environments we learn the wrong lessons (or at least incomplete lessons).

From the paper again:

If kind, we have the necessary conditions for accurate inference. Therefore, any errors must be attributed to the person (e.g., inappropriate information aggregation). If wicked, we can identify how error results from task features, although these can also be affected by human actions. In short, our framework facilitates pinpointing the sources of errors (task structure and/or person).

This has interesting implications for thinking about safety, and the role of operators (and builders) in ensuring safety. In kind environments, operator mistakes can be seen as human error, where the human learned the wrong lesson or did the wrong thing. In wicked environments, humans will always make errors, because there are risks that are not captured by their experience.

Going back to Anatoly Dyatlov's question to the IAEA:

How and why should the operators have compensated for design errors they did not know about?

Dyatlov is saying that operating Chernobyl was a wicked environment. Operators applying their best knowledge and experience, even flawlessly, weren't able to make the right inferences about the safety of the system.

Back to the paper:

Since kind environments are a necessary condition for accurate judgments, our framework suggests deliberately creating kind environments.

I found reading this to be something of a revelation. When building safe systems, we need to make those systems kind. We need to deliberately create kind environments. If we build them so they are wicked, then we set our operators, tooling and automation up for failure.

Some parts of our field are inherently wicked. In large and complex systems the set of circumstances we learn from is always incomplete, because the system has so many states that there's no way to have seen them all before. In security, there's an active attacker who's trying very hard to make the environment wicked.

The role of the designer and builder of systems is to make the environment as kind as possible. Extract as much wickedness as possible, and try not to add any.

Footnotes

  1. The book is worth reading. It contains a lot of interesting ideas, but like all popular science books also contains a lot of extrapolation beyond what the research supports. If you're pressed for time, the EconTalk episode about the book covers a lot of the material.

August 11, 2019

Derek Jones (derek-jones)

My book’s pdf generation workflow August 11, 2019 11:50 PM

The process used to generate the pdf of my evidence-based software engineering book has been on my list of things to blog about, for ever. An email arrived this afternoon, asking how I produced various effects using Asciidoc; this post probably contains rather more than N. Psaris wanted to know.

It’s very easy to get sucked into fiddling around with page layout and different effects. So, unless I am about to make a release of a draft, I only generate a pdf once, at the end of each month.

At the end of the month the text is spell checked using aspell, and then grammar checked using Language tool. I have an awk script that checks the text for mistakes I have made in the past; this rarely matches, i.e., I seem to be forever making different mistakes.

The sequencing of tools is: R (Sweave) -> Asciidoc -> docbook -> LaTeX -> pdf; assorted scripts fiddle with the text between outputs and inputs. The scripts and files mentioned below are available for download.

R generates pdf files (via calls to the Sweave function, I have never gotten around to investigating Knitr; the pdfs are cropped using scripts/pdfcrop.sh) and the ascii package is used to produce a few tables with Asciidoc markup.

Asciidoc is the markup language used for writing the text. A few years after I started writing the book, Stuart Rackham, the creator of Asciidoc, decided to move on from working and supporting it. Unfortunately nobody stepped forward to take over the project; not a problem, Asciidoc just works (somebody did step forward to reimplement the functionality in Ruby; Asciidoctor has an active community, but there is no incentive for me to change). In my case, the output from Asciidoc is xml (it supports a variety of formats).

Docbook appears in the sequence because Asciidoc uses it to produce LaTeX. Docbook takes xml as input, and generates LaTeX as output. Back in the day, Docbook was hailed as the solution to all our publishing needs, and wonderful tools were going to be created to enable people to produce great looking documents.

LaTeX is the obvious tool for anybody wanting to produce lovely looking books and articles; tex/ESEUR.tex is the top-level LaTeX, which includes the generated text. Yes, LaTeX is a markup language, and I could have written the text using it. As a language I find LaTeX too low level. My requirements are not complicated, and I find it easier to write using a markup language like Asciidoc.

The input to Asciidoc and LuaTeX (used to generate pdf from LaTeX) is preprocessed by scripts (written using sed and awk; see scripts/mkpdf). These scripts implement functionality that Asciidoc does not support (or at least I could not see how to do it without modifying the Python source). Scripts are a simple way of providing the extra functionality that does not require me to remember details about the internals of Asciidoc. If Asciidoc was being actively maintained, I would probably have worked to get some of the functionality integrated into a future release.

There are a few techniques for keeping text processing scripts simple. For instance, the cost of a pass over the text is tiny, so there is little to be gained by trying to do everything in one pass; handling the possibility that markup spans multiple lines can be complicated, so a simple solution is to join consecutive lines together if there is a possibility that markup spans them (i.e., the actual matching and conversion no longer has to worry about line breaks).

Many simple features are implemented by a script modifying Asciidoc text to include some ‘magic’ sequence of characters, which is subsequently matched and converted in the generated LaTeX, e.g., special characters, and hyperlinks in the pdf.

A more complicated example handles my desire to specify that a figure appear in the margin; the LaTeX sidenotes package supports figures in margins, but Asciidoc has no way of specifying this behavior. The solution was to add the word “Margin”, to the appropriate figure caption option (in the original Asciidoc text, e.g., [caption="Margin ", label=CSD-95-887]), and have a script modify the LaTeX generated by docbook so that figures containing “Margin” in the caption invoked the appropriate macro from the sidenotes package.
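As a toy illustration of that post-processing step (the real scripts use sed and awk, and the environment names below are my guesses, not necessarily the macros the book actually uses):

# Sketch: rewrite LaTeX figure environments whose caption carries the
# "Margin" marker so they use a margin-figure environment instead. The
# marginfigure name is assumed from the sidenotes package; adjust as needed.
import re

def move_marked_figures_to_margin(latex):
    pattern = re.compile(r"\\begin\{figure\}(.*?)\\end\{figure\}", re.DOTALL)

    def rewrite(match):
        body = match.group(1)
        if "Margin" in body:                    # marker injected earlier
            body = body.replace("Margin ", "")  # drop the marker itself
            return "\\begin{marginfigure}" + body + "\\end{marginfigure}"
        return match.group(0)

    return pattern.sub(rewrite, latex)

sample = r"""\begin{figure}
\includegraphics{some-figure.pdf}
\caption{Margin Lines of code per function.}
\end{figure}"""
print(move_marked_figures_to_margin(sample))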

There are still formatting issues waiting to be solved. For instance, some tables are narrow enough to fit in the margin, but I have not found a way of embedding this information in the table information that survives through to the generated LaTeX.

My long-time pet hate is the formatting used by R’s plot function for exponentiated values as axis labels. My target audience are likely to be casual users of R, so I am sticking with basic plotting (i.e., no calls to ggplot). I do wish the core R team would integrate the code from the magicaxis package, to bring the printing of axis values into the era of laser printers and bit-mapped displays.

Ideas and suggestions welcome.

Ponylang (SeanTAllen)

Last Week in Pony - August 11, 2019 August 11, 2019 01:31 PM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

Gustaf Erikson (gerikson)

Introducing HN&&LO August 11, 2019 11:58 AM

HN&&LO is a web page that scrapes the APIs of Hacker News and Lobste.rs and collates those entries that share a URL.

This lets you easily follow the discussion on both sites for these links.

I wrote this tool because I’m much more active on Lobste.rs than on HN, but I would like to keep “tabs” on both sites. I’m especially interested in how the information is disseminated between the sites.

“Slow webapps”

I’ve long liked to grab stuff via a web API, stuff it into a DB, and output a web page based on the data. This project was a first attempt to use a templating engine in Perl, and I’m ashamed to say I’ve taken so long to understand the greatness of this approach.
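HN&&LO itself is Perl; purely as a sketch of the collation idea (not the author's code), something along these lines would fetch recent items from both public APIs and group them by URL. Treat the endpoint and field names as my assumptions about those APIs.

# Sketch: pull recent stories from Lobste.rs and Hacker News and collate the
# ones that share a URL. Field names are assumptions about the public APIs.
from collections import defaultdict
import requests

def lobsters_newest():
    for story in requests.get("https://lobste.rs/newest.json").json():
        yield story.get("url"), story.get("title"), story.get("comments_url")

def hn_newest(limit=100):
    base = "https://hacker-news.firebaseio.com/v0"
    for item_id in requests.get(f"{base}/newstories.json").json()[:limit]:
        item = requests.get(f"{base}/item/{item_id}.json").json() or {}
        if item.get("url"):
            yield (item["url"], item.get("title"),
                   f"https://news.ycombinator.com/item?id={item_id}")

by_url = defaultdict(dict)
for site, stories in (("lobsters", lobsters_newest()), ("hn", hn_newest())):
    for url, title, comments in stories:
        if url:
            by_url[url][site] = (title, comments)

for url, sites in by_url.items():
    if len(sites) == 2:                 # submitted to both sites
        print(url, sites)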

The data updates hourly; there's rarely more than a couple of new entries on Lobste.rs per hour.

In fact, this can illustrate the difference in scale between the two sites.

Time period: 5 Jul 2019 11:19 to 8 Aug 2019 23:59 (both times in UTC).

  • Number of submissions with URLs on Hacker News: 26,974
  • Number of submissions on Lobste.rs: 899

Average number of submissions per hour: 33 for HN, 1 for Lobste.rs.

Once updated, a static web page is generated. This keeps overhead low.

Differences between Hacker News and Lobste.rs

Coincidentally, while I was writing this post, an article was published in the New Yorker about the moderators (all two of them) of Hacker News. This sparked a discussion on HN (obviously) but also on Lobste.rs (see both linked here).

To me, what makes Lobste.rs better is that it’s much smaller (one can easily catch up with a day’s worth of comments), the editing tools are more sophisticated, and the tag system allows one to determine whether a post is interesting or not. HN is much harder to grasp.

I think the tendencies noted in the article are common to both sites - male, youngish software developers with a rationalist bent. But I think Lobste.rs is a bit more self-aware of this fact (or rather, critiques are more visible there). In some ways, Lobste.rs, being a reaction to HN, defines itself culturally both for and against the bigger site.

Source and TODO

If you’re interested in pedestrian Perl code, source is here.

TODO list is here

August 10, 2019

Jan van den Berg (j11g)

Generation X – Douglas Coupland August 10, 2019 09:04 AM

I am a Douglas Coupland fan. And I think his debut Generation X still holds up as one of his best novels. I probably read it for the first time over ten years ago. And I have since then read several other Coupland novels. (I also reviewed jPod extensively in 2007 on my Dutch blog). So I am quite familiar with his unique style, which is a large part of the attraction. However rereading this book was quite the revelation.

Generation X – Douglas Coupland (1991) – 253 pages

First, to my own shock, I barely seemed to remember the main story (sure, three friends in a desert town, but that was about it). This might be fuel for an entire blogpost on this subject (“What good is reading when you forget? Does this depend on the story or author? etc.”).

But second, when I was reading I had to double check when this book was written — yes, really 1991! The story also takes place around that time. Sure, there are a few outdated references, but mainly it struck me how relevant, fresh and on-point Coupland already was in describing and predicting modern society (our society) and our relationships — with each other or technology — which is a main Coupland theme throughout all of his work. Remember this book was written pre-www (pre-grunge!). So it was definitely a different time. But the characters and their stories hold up well, because it is mainly about human interaction. And the struggle these Generation X characters face in their search for meaning might be even more relevant today.

So rereading this book left me with even more respect for Coupland as an astute and perceptive writer.

The post Generation X – Douglas Coupland appeared first on Jan van den Berg.

The Trumpet of Conscience – Dr. Martin Luther King Jr. August 10, 2019 08:55 AM

Martin Luther King Jr. was only 39 years (and 2 months and 19 days) old when he was murdered. Thirty-nine. I never realised this — until I am 39 myself now.

When he died he had already received a Nobel prize and over 100 honorary degrees from all over the world, but more importantly, he had changed America forever.

The Trumpet of Conscience – Dr. Martin Luther King Jr. (1968) – 93 pages

Much has been written about MLK and by MLK. And it is especially the latter I am interested in.

This small book is a collection of (Christmas) lectures he wrote and presented on Canadian radio in 1967 — the famous Massey Lectures. This collection itself was published less than a month after he was murdered (which makes the foreword by his wife and widow quite chilling to read).

The Dutch translation is a bit rough and outdated. But if you ever heard MLK speak there is very little imagination necessary to hear the man’s voice. It is so powerful that it cuts through time and languages.

The themes he discusses are familiar, but what is most prominent is the very, very long journey that he still saw ahead and that he was not so sure we would ever get there. These lectures help in understanding what MLK saw as the ultimate goal and also pinpoint the root of the problem. A goal which is sadly enough still utterly relevant. And a root cause that is still around.

You don’t have to read the book, here is a YouTube playlist I created with all five speeches where you hear the man himself.

The post The Trumpet of Conscience – Dr. Martin Luther King Jr. appeared first on Jan van den Berg.

Andrew Montalenti (amontalenti)

JavaScript: The Modern Parts August 10, 2019 07:00 AM

In the last few months, I have learned a lot about modern JavaScript and CSS development with a local toolchain powered by Node 8, Webpack 4, and Babel 7. As part of that, I am doing my second “re-introduction to JavaScript”. I first learned JS in 1998. Then relearned it from scratch in 2008, in the era of “The Good Parts”, Firebug, jQuery, IE6-compatibility, and eventually the then-fledgling Node ecosystem. In that era, I wrote one of the most widely deployed pieces of JavaScript on the web, and maintained a system powered by it.

Now I am re-learning it in the era of ECMAScript (ES6 / ES2017), transpilation, formal support for libraries and modularization, and mobile web performance with things like PWAs, code splitting, and WebWorkers / ServiceWorkers. I am also pleasantly surprised that JS, via the ECMAScript standard and Babel, has evolved into a pretty good programming language, all things considered.

To solidify all this stuff, I am using webpack/babel to build all static assets for a simple Python/Flask web app, which ends up deployed as a multi-hundred-page static site.

One weekend, I ported everything from Flask-Assets to webpack, and took the opportunity to play around with ES2017 features, as well as explore the Sass CSS preprocessor and some D3.js examples. And boy, did that send me down a yak shaving rabbit hole. Let’s start from the beginning!

JavaScript in 1998

I first learned JavaScript in 1998. It’s hard to believe that this was 20 years — two decades! — ago. This post will chart the two decades since — covering JavaScript in 1998, 2008, and 2018. The focus of the article will be on “modern” JavaScript, as of my understanding in 2018/2019, and, in particular, what a non-JavaScript programmer should know about how the language — and its associated tooling and runtime — have dramatically evolved. If you’re the kind of programmer who thinks, “I code in Python/Java/Ruby/C/whatever, and thus I have no use for JavaScript and don’t need to know anything about it”, you’re wrong, and I’ll describe why. Incidentally, you were right in 1998, you could get by without it in 2008, and you are dead wrong in 2018.

Further, if you are the kind of programmer who thinks, “JavaScript is a tire fire I’d rather avoid because it lacks basic infrastructure we take for granted in ‘real’ programming languages”, then you are also wrong. I’ll be able to show you how “not taking JavaScript seriously” is the 2018 equivalent of the skeptical 2008-era programmer not taking Python or Ruby seriously. JavaScript is a language that is not only here to stay, but has already — and will continue to — take over the world in several important areas. To be a serious programmer, you’ll have to know JavaScript’s Modern and Good Parts — as well as some other server-side language, like Python, Ruby, Go, Elixir, Clojure, Java, and so on. But, though you can swap one backend language for the other, you can’t avoid JavaScript: it’s pervasive in every kind of web deployment scenario. And, the developer tooling has fully caught up to your expectations.

JavaScript during The Browser Wars

Browsers were a harsh environment to target for development; not only was Internet adoption low and not only were internet connections slow, but the browser wars — mainly between Netscape and Microsoft — were creating a lot of confusion.

Netscape Navigator 4 was released in 1997, and Internet Explorer 5 was released in 1998. The web was still trying to make sense of HTML and CSS; after all, CSS1 had only been released a year earlier.

In this environment, the definitive web development book of the era was “JavaScript: The Definitive Guide”, which weighed in at over 500 pages. Note that, in 1998, the most widely used programming languages were C, C++, and Java, as well as Microsoft Visual Basic for Windows programmers. So expectations about “what programming was” were framed mostly around these languages.

In this sense, JavaScript was quite, quite different. There was no compiler. There were no debuggers (at least, not very good ones). There was no way to “run a JavaScript program”, except to write scripts in your browser, and see if they ran. Development tools for JavaScript were still primitive or nonexistent. There was certainly not much of an open source community around JS; to figure out how to do things, you would typically “view source” on other people’s websites. Plus, much of the discussion in the programming community of web developers was about how JavaScript represented a compatibility and security nightmare.

Not only differing implementations across browsers, but also many ways for you to compromise the security of your web application by relying upon JavaScript too directly. A common security bug in that era was to validate forms with JavaScript, but still allow invalid (and insecure) values to be passed to the server. Or, to password-protect a system, but in a way that inspection of JavaScript code could itself crack access to that system. Combined with the lack of a proper development environment, the “real web programmers” used JavaScript as nothing more than a last resort — a way to inject a little bit of client-side code and logic into pages where doing it server-side made no sense. I remember one of the most common use cases for JavaScript at the time was nothing more than changing an image upon hover, as a stylistic effect, or implementing a basic on-hover menu on a complex multi-tab form. These days, these tasks can be achieved with vanilla CSS, but, at the time, JavaScript DOM manipulation was your only option.

JavaScript in 2008

Fast forward 10 years. In 2008, Douglas Crockford released the book, “JavaScript: The Good Parts”. By using a language subsetting approach, Crockford pointed out that, not only was JavaScript not a bad language, it was actually a good language, well-designed, with certain key features that made it stand out vs competitors.

Around this time, several JavaScript libraries were becoming popular, notably jQuery, Prototype, YUI, and Dojo. These libraries attempted to provide JavaScript with something it was missing: a cross-browser compatibility layer and programming model for doing dynamic manipulation of pages inside the browser, and especially for a new model of JavaScript programming that was emerging, with the moniker AJAX. This was the beginning of the trend of rich internet applications, “dynamic” web apps, single-page applications, and the like.

JavaScript’s Tooling Leaps

The developer tooling for JavaScript also took some important leaps. In 2006, the Firefox team released Firebug, a JavaScript and DOM debugger for Firefox, which was then one of the world’s most popular web browsers, and open source. Two years later, Google would make the first release of Google Chrome, which bundled some developer tooling. Around the same time that Chrome was released, Google also released V8, the JavaScript engine that was embedded inside of Chrome. That marked the first time that the world had seen a full-fledged, performant open source implementation of the JavaScript language that was not completely tied to a browser. Firefox’s JS engine, SpiderMonkey, was part of its source tree, but was not necessarily marketed to be modularized and used outside the context of the Firefox browser.

I remember that aside from Crockford’s work on identifying the good parts of JavaScript, and aside from the new (and better) developer tooling, a specific essay on Mozilla’s website helped me re-appreciate the language, and throw away my 1998 conception. That article was called “A Reintroduction to JavaScript”. It showed how JavaScript was actually a real programming language, once you got past the tooling bumps. It was a little under-powered in its standard library, so you had to rely upon frameworks (like jQuery) to give you some tools, and little micro-libraries beyond that.

A year after reading that essay, I wrote my own about JavaScript, which was called “Real, Functional Programs with JavaScript” (archived PDF here). It described how JavaScript was, quite surprisingly, more of a functional language than Java 8 or Python 2.7. And that with a little focus on understanding the functional core, really good programs could be written. I recently converted this essay into a set of instructional slides with the name, “Lambda JavaScript” (archived notes here), which I now use to teach new designers/developers the language from first principles.
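
To give a flavor of what writing against that functional core looks like, here is a small illustrative sketch in modern syntax; the function names and values are invented for the example, not taken from the essay:

// Functions are first-class values: they can be passed around and returned.
const compose = (f, g) => x => f(g(x));

const double = n => n * 2;
const increment = n => n + 1;
const doubleThenIncrement = compose(increment, double);

// Higher-order array methods replace explicit loops.
const totalOfDoubledEvens = [1, 2, 3, 4, 5]
    .filter(n => n % 2 === 0)
    .map(double)
    .reduce((sum, n) => sum + n, 0);

console.log(doubleThenIncrement(10)); // 21
console.log(totalOfDoubledEvens);     // 12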

But, let’s return to history. Only a year after the release of Chrome, in 2009, we saw the first release of NodeJS, which took the V8 JavaScript engine and embedded it into a server-side environment, which could be used to experiment with JavaScript in a REPL, write scripts, and write HTTP servers on a performant event loop.

People began to experiment with command-line tools written in JavaScript, and with web frameworks written in JavaScript. It was at this point that the pace of development in the JavaScript community accelerated. In 2010, npm — the Node Package Manager — was released, and it and its package registry quickly grew to represent the full JavaScript open source community. Over the next few years, the browser vendors of Mozilla, Google, Apple, and Microsoft engaged in the “JavaScript Engine Wars”, with each pushing its respective engine (SpiderMonkey, V8, Nitro, and Chakra) to new heights.

Meanwhile, NodeJS and V8 became the “standard” JS engine running on developers’ machines from the command line. Though developers still had to target old “ECMAScript 3” browsers (such as IE6), and thus had to write restrained JavaScript code, the “evergreen” (auto-updating) browsers from Mozilla, Google, and Apple gained support for ECMAScript 5 and beyond, and mobile web browsing went into ascendancy, thus making Chrome and Safari dominant in market share, especially on smartphones.

In all of this noise of web 2.0, cloud, and mobile, we finally reached “mobilegeddon” in 2015, where mobile traffic surpassed desktop traffic, and we also saw several desktop operating systems move to a mostly-evergreen model, such as Windows 10, Mac OS X, and ChromeOS. As a result, as early as 2015 — but certainly by 2018 — JavaScript became the most widely deployed and performant programming language with “built-in support” on almost every desktop and mobile computer in the world.

In other words, if you want your code to be “write once, run everywhere” in 2018, your best option is JavaScript.

JavaScript in 2018-2019

In 2018-2019, several things have changed about the JavaScript community. Development tools are no longer fledgling, but are, instead, mature. There are built-in development tools in all of Safari, Firefox, and Chrome browsers (and the Firebug project is mostly deprecated). There are also ways to debug mobile web browsers using mobile development tools. NodeJS and npm are mature projects that are shared infrastructure for the whole JavaScript community.

What’s more, JavaScript, as a language, has evolved. It’s no longer just the kernel language we knew in 1998, nor the “good parts” we knew in 2008, but instead the “modern parts” of JavaScript include several new language features that go by the name “ES6” (ECMAScript v6) or “ES2017” (ECMAScript 2017 Edition), and beyond.
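
To make that concrete, here is a small hedged sketch of a few of those newer features (block-scoped bindings, destructuring, template literals, arrow functions, and async/await); the names and values are invented purely for illustration:

// Block-scoped bindings, destructuring, and template literals (ES2015).
const user = { name: "Ada", languages: ["javascript", "python"] };
const { name, languages: [primary] } = user;
console.log(`${name} mostly writes ${primary}`);

// Arrow functions with default parameters.
const greet = (who = "world") => `Hello, ${who}!`;

// async/await (ES2017) for asynchronous code without callback pyramids.
async function main() {
    const message = await Promise.resolve(greet(name));
    console.log(message);
}

main();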

Some concepts in HTML have evolved, such as HTML5 video and audio elements. CSS, too, has evolved, with the CSS2 and CSS3 specifications being ratified and widely adopted. JSON has all but entirely replaced XML as an interchange format and is, of course, JavaScript-based.

The V8 engine has also gotten a ton of performance-oriented development. JavaScript is now JIT-compiled, with speedy startup times and near-native performance for CPU-bound blocks. Modern web performance techniques are almost entirely based on a speedy JavaScript engine and the ability to script different elements of a web application’s loading approach.

The language itself has become comfortable with something akin to “compiler” and “command line” toolchains you might find in Python, Ruby, C, and Java communities. In lieu of a JavaScript “compiler”, we have node, JavaScript unit testing frameworks like Mocha/Jest, as well as eslint and babel for syntax checking. (More on this later.)
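
As a rough illustration of the testing piece, here is what a minimal Jest-style test might look like; the add module and the file names are hypothetical:

// add.js (a hypothetical module under test)
function add(a, b) {
    return a + b;
}
module.exports = add;

// add.test.js (run with `npx jest`)
const add = require('./add');

test('adds two numbers', () => {
    expect(add(1, 2)).toBe(3);
});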

In lieu of a “debugger”, we have the devtools built into our favorite browser, like Chrome or Firefox. This includes rich debuggers, REPLs/consoles, and visual inspection tools. Scriptable remote connections to a node environment or a browser process (via new tools like Puppeteer) further close the development loop.
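
As one small example of that scriptable loop, a Puppeteer script run under node can drive a headless Chrome instance; this is just a minimal sketch, and the URL is a placeholder:

const puppeteer = require('puppeteer');

(async () => {
    // Launch a headless Chrome instance and open a new tab.
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Navigate, then pull data back out of the running page.
    await page.goto('https://example.com');
    console.log(await page.title());

    await browser.close();
})();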

To use JavaScript in 2018, therefore, is to adopt a system that has achieved 2008-era maturity that you would see in programming ecosystems like Python, Ruby, and Java. But, in many ways, JavaScript has surpassed those communities. For example, where Python 3’s reference implementation, CPython, is certainly fast as far as dynamic languages go, JavaScript’s reference implementation, V8, is optimized by JIT and hotspot optimization techniques that are only found in much older programming communities, such as Java’s. That means that unmodified, hotspot JavaScript code can be optimized into native code automatically by the Node runtime and by browsers such as Chrome.

Whereas Java and C users may still have debates about where, exactly, open source projects should publish their releases, that issue is settled in the JavaScript community: it’s npm, which operates similarly to PyPI and pip in the Python community.

Some essential developer tooling issues were only recently settled. For example, because modern JavaScript (such as code written using ES2017 features) needs to target older browsers, a “transpilation” toolchain is necessary, to compile ES2017 code into ES3 or ES5 JavaScript code, suitable for older browsers. Because “old JavaScript” is a Turing complete, functional programming language, we know we can translate almost any new “syntactic sugar” to the old language, and, indeed, the designers of the new language features are being careful to only introduce syntax that can be safely transpiled.

What this means, however, is that to do JavaScript development “The Modern Way”, while adopting its new features, you simply must use a local transpiler toolchain. The community standard for this at the moment is known as babel, and it’s likely to remain the community standard well into the future.
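
As a rough sketch of what that transpilation step does, take a one-line function written with ES2015+ syntax; the ES5 version shown below is illustrative of the kind of code a tool like babel emits, not its exact output:

// Modern input (arrow function, default parameter, template literal):
const greet = (name = "world") => `Hello, ${name}!`;

// Roughly the ES5 equivalent a transpiler would emit in its place:
var greetTranspiled = function (name) {
    if (name === undefined) { name = "world"; }
    return "Hello, " + name + "!";
};

console.log(greet());            // "Hello, world!"
console.log(greetTranspiled());  // "Hello, world!"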

Another issue that plagued 2008-era JavaScript was build tooling and modularization. In the 2008-2012 era, ad-hoc tools like make were used to concatenate JavaScript modules together, and often tools such as Google’s Java-based Closure Compiler or the JavaScript-based UglifyJS were used to assemble JavaScript projects into modules that could be included onto pages. In 2012, the Grunt tool was released as a JavaScript build tool, written atop NodeJS, runnable from the command-line, and configurable using a JavaScript “Gruntfile”. A whole slew of build tools similar to this were released in the period, creating a whole lot of code churn and confusion.

Thankfully, today, Single Page Application frameworks like React have largely solved this problem, with the ascendancy of webpack and the reliance on npm run-script. Today, the webpack community has come up with a sane approach to JavaScript modularization that relies upon the modern JS support for modules, and then development-time tooling, provided mainly through the webpack CLI tool, allows for local development and production builds. This can all be scripted and wired together with simple npm run-script commands. And since webpack can be itself installed by npm, this keeps the entire development stack self-contained in a way that doesn’t feel too dissimilar from what you might enjoy with lein in Clojure or python/pip in Python.
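
A minimal sketch of how that wiring commonly looks in a project’s package.json follows; the script names and version numbers are illustrative, not a recommendation:

{
  "name": "my-app",
  "private": true,
  "scripts": {
    "start": "webpack-dev-server --mode development",
    "build": "webpack --mode production",
    "test": "jest",
    "lint": "eslint src"
  },
  "devDependencies": {
    "webpack": "^4.39.0",
    "webpack-cli": "^3.3.0",
    "webpack-dev-server": "^3.8.0",
    "babel-loader": "^8.0.6",
    "eslint": "^6.1.0",
    "jest": "^24.8.0"
  }
}

With something like this in place, npm start, npm run build, and npm test are the only commands a newcomer to the project needs to learn.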

Yes, it has taken 20 years, but JavaScript is now just as viable a choice for your backend and CLI tooling projects as Python was in the past. And, for web frontends, it’s your only choice. So, if you are a programmer who cares, at all, about distribution of your code to users, it’s time to care about JavaScript!

In a future post, I plan to go even deeper on JavaScript, covering:

  • How to structure your first “modern” JavaScript project
  • Using Modern JS with Python’s Flask web framework for simple static sites
  • Understanding webpack, and why it’s important
  • Modules, modules, modules. Why JS modules matter.
  • Understanding babel, and why it’s important
  • Transpilation, and how to think about evolving JS/ES features and “compilation”
  • Using eslint for bonus points
  • Using sourcemaps for debugging
  • Using console.assert and console for debugging
  • Production minification with uglify
  • Building automated tests with jest
  • Understanding the value of Chrome scripting and puppeteer

Want me to keep going? Let me know via @amontalenti on Twitter.

August 09, 2019

Derek Jones (derek-jones)

Growth and survival of gcc options and processor support August 09, 2019 02:43 PM

Like any actively maintained software, compilers get more complicated over time. Languages are still evolving, and options are added to control the support for features. New code optimizations are added, which don’t always work perfectly, and options are added to enable/disable them. New ways of combining object code and libraries are invented, and new options are added to allow the desired combination to be selected.

The web pages summarizing the options for gcc, for the 96 versions between 2.95.2 and 9.1, have a consistent format, which means they are easily scraped. The following plot shows the number of options relating to various components of the compiler, for these versions (code+data):

Number of options supported by various components of gcc, over 20 years.

The total number of options grew from 632 to 2,477. The number of new optimizations, or at least the options supporting them, appears to be leveling off, but the number of new warnings continues to increase (ah, the good ol’ days, when -Wall covered everything).

The last phase of a compiler is code generation, and production compilers are generally structured to enable new processors to be supported by plugging in an appropriate code generator; since version 2.95.2, gcc has supported 80 code generators.

What can be added can be removed. The plot below shows the survival curve of gcc support for processors (80 supported cpus, with support for 20 removed up to release 9.1), and non-processor specific options (there have been 1,091 such options, with 214 removed up to release 9.1); the dotted lines are 95% confidence intervals.

Survival curve of gcc options and support for specific processors, over 20 years.

Oliver Charles (ocharles)

Who Authorized These Ghosts!? August 09, 2019 12:00 AM

Recently at CircuitHub we’ve been making some changes to how we develop our APIs. We previously used Yesod with a custom router, but we’re currently exploring Servant for API modelling, in part due to its potential for code generation for other clients (e.g., our Elm frontend). Along the way, this is requiring us to rethink and reinvent previously established code, and one of those areas is authorization.

To recap, authorization is

the function of specifying access rights/privileges to resources related to information security and computer security in general and to access control in particular.

This is in contrast to authentication, which is the act of showing that someone is who they claim to be.

Authorization is a very important process, especially in a business like CircuitHub where we host many confidential projects. Accidentally exposing this data could be catastrophic to both our business and customers, so we take it very seriously.

Out of the box, Servant has experimental support for authentication, which is a good start. servant-server gives us Servant.Server.Experimental.Auth which makes it a doddle to plug in our existing authentication mechanism (cookies & Redis). But that only shows that we know who is asking for resources; how do we check that they are allowed to access the resources?

As a case study, I want to have a look at a particular end-point, /projects/:id/price. This endpoint calculates the pricing options CircuitHub can offer a project, and there are a few important points to how this endpoint works:

  1. The pricing for a project depends on the user viewing it. This is because some users can consign parts so CircuitHub won’t order them. Naturally, this affects the price, so pricing is viewer dependent.
  2. Some projects are owned by organizations, and should be priced by the organization as a whole. If a user is a member of the organization that owns the project pricing has been requested for, return the pricing for the organization. If the user is not in the organization, return their own custom pricing.
  3. Private projects should only expose their pricing to superusers, the owner of the project, and any members of the project’s organization (if it’s owned by an organization).

This specification is messy and complicated, but that’s just reality doing its thing.

Our first approach was to try and represent this in Servant’s API type. We start with the “vanilla” route, with no authentication or authorization:

Next, we add authorization:

At this point, we’re on our own - Servant offers no authorization primitives (though there are discussions on this topic).

My first attempt to add authorization to this was:

There are two new routing combinators here: AuthorizeWith and CanView. The idea is AuthorizeWith somehow captures the result of authenticating, and provides that information to CanView. CanView itself does some kind of authorization using a type class based on its argument - here Capture "id" ProjectId. The result is certainly something that worked, but I was unhappy with both the complexity to implement it (which leaves scope to get it wrong), and the lack of actual evidence of authorization.

The latter point needs some expanding. What I mean by “lacking evidence” is that with the current approach, the authorization is essentially like writing the following code:

If I later add more resource access into doThings, what will hold me accountable to checking authorization on those resources? The answer is… nothing! This is similar to boolean blindness - we performed a logical check, only to throw all the resulting evidence away immediately.

At this point I wanted to start exploring some different options. While playing around with ideas, I was reminded of the wonderful paper “Ghosts of Departed Proofs”, and it got me thinking… can we use these techniques for authorization?

Ghosts of Departed Proofs

The basic idea of GDP is to name values using higher-rank quantification, and then - in trusted modules - produce proofs that refer to these names. To name values, we introduce a Named type, and the higher-ranked function name to name things:

Note that the only way to construct a Named value outside of this module is to use name, which introduces a completely distinct name for a limited scope. Within this scope, we can construct proofs that refer to these names. As a basic example, we could use GDP to prove that a number is prime:

Here we have our first proof witness - IsPrime. We can witness whether or not a named Int is prime using checkPrime - like the boolean value isPrime this determines if a number is or isn’t prime, but we get evidence that we’ve checked a specific value for primality.

This is the whirlwind tour of GDP, I highly recommend reading the paper for a more thorough explanation. Also, the library justified-containers explores these ideas in the context of maps, where we have proofs that specific items are in the map (giving us total lookups, rather than partial lookups).

GDP and Authorization

This is all well and good, but how does this help with authorization? The basic idea is that authorization is itself a proof - a proof that we can view or interact with resources in a particular way. First, we have to decide which functions need authorization - these functions will be modified to require proof values that refer to the function arguments. In this example, we’ll assume our Servant handler is going to itself make a call to the price :: ProjectId -> UserId -> m Price function. However, given the specification above, we need to make sure that user and project are compatible. To do this, we’ll name the arguments, and then introduce a proof that the user in question can view the project:

But what is this CanViewProject proof?

A first approximation is to treat it as some kind of primitive or axiom. A blessed function can postulate this proof with no further evidence:

This is a good start! Our price function can only be called with a CanViewProject that matches the named arguments, and the only way to construct such a value is to use canViewProject. Of course we could get the implementation of this wrong, so we should focus our testing efforts to make sure it’s doing the right thing.

However, the Agda programmer in me is a little unhappy about just blindly postulating CanViewProject at the end. We’ve got a bit of vision back from our boolean blindness, but the landscape is still blurry. Fortunately, all we have to do is recruit more of the same machinery so far to subdivide this proof into smaller ones:

Armed with these smaller authorization primitives, we can build up our richer authorization scheme:

Now canViewProject just calls out to the other authorization routines to build its proof. Furthermore, there’s something interesting here. CanViewProject doesn’t postulate anything - everything is attached with a proof of the particular authorization case. This means that we can actually open up the whole CanViewProject module to the world - there’s no need to keep anything private. By doing this and allowing people to pattern match on CanViewProject, authorization results become reusable - if something else only cares that a user is a super user, we might be able to pull this directly out of CanViewProject - no need for any redundant database checks!

In fact, this very idea can help us implement the final part of our original specification:

Some projects are owned by organizations, and should be priced by the organization as a whole. If a user is a member of the organization that owns the project pricing has been requested for, return the pricing for the organization. If the user is not in the organization, return their own custom pricing.

If we refine our UserBelongsToProjectOrganization proof, we can actually maintain a bit of extra evidence:

Now whenever we have a proof UserBelongsToProjectOrganization, we can pluck out the actual organization that we’re talking about. We also have evidence that the organization owns the project, so we can easily construct a new CanViewProject proof - proofs generate more proofs!

Relationship to Servant

At the start of this post, I mentioned that the goal was to integrate this with Servant. So far, we’ve looked at adding authorization to a single function, so how does this interact with Servant? Fortunately, it requires very little to change. The Servant API type is authorization free, but does mention authentication.

It’s only when we need to call our price function that we need to have performed some authorization, and this happens in the server-side handler. We do this by naming the respective arguments, witnessing the authorization proof, and then calling price:

Conclusion

That’s where I’ve got so far. It’s early days, but the approach is promising. What I really like is there is almost a virtual slider between ease and rigour. It can be easy to get carried away, naming absolutely everything and trying to find the most fundamental proofs possible. I’ve found so far that it’s better to back off a little bit - are you really going to get some set membership checks wrong? Maybe. But a property check is probably going to be enough to keep that function in check. We’re not in a formal proof engine setting; pretending we are just makes things harder than they need to be.

August 08, 2019

Jan van den Berg (j11g)

A Supposedly Fun Thing I’ll Never Do Again – David Foster Wallace August 08, 2019 11:36 AM

David Foster Wallace can write. I mean, he can really write. In related news: water is wet. This man’s writing struck me as an epiphany, a beacon of light, a clear and unmistakable differentiator between merely good writing and exceptional writing.

A Supposedly Fun Thing I’ll Never Do Again – David Foster Wallace (1997) – 353 pages

I have known about DFW for some time now, and I have seen his famous commencement speech several times. It strongly resonates with me. As some other interviews do. But his writing? It seemed intimidating.

Infinite Jest, his magnum opus, is this famous thousand-page multi-layered beast of a book. So I thought I’d start with something lighter. ‘A Supposedly Fun Thing…’ is a collection of essays and so it seemed like a good starting place.

It is a collection of 7 stories and essays on tennis, state fairs, TV, irony, David Lynch and a very entertaining cruise among other things. (Each story could warrant a blogpost by itself — there is just so much there). Wallace demonstrates with academic skill his philosophical insights on modern life with the essays about other writers, TV and irony. But he is, just as easily, able to make you scream with laughter when he describes a highly anticipated and ultimately disappointing experience with the dessert tasting booth at the state fair. This man could seemingly do anything with a pen.

The words, and sentences (and footnotes!) all just seem to ooze effortlessly out of him. His voice is radically clear and distinct and his vocabulary and attention to detail are unmatched. It is very obvious Wallace operated on a different level, intellectually and talent wise. And I often stopped reading and wondered about how his depression got the best of him in the end, and whether this much talent and severe depression are somehow two sides of the same coin. Because judging by his writing, I don’t think he experienced the world the same way most people do (whatever that is).

The first thing I did after finishing this book, was head to a bookshop where I bought Infinite Jest. It still looks intimidating, but I can now only assume it must be a definitely fun thing to read.

The post A Supposedly Fun Thing I’ll Never Do Again – David Foster Wallace appeared first on Jan van den Berg.

August 07, 2019

Gonçalo Valério (dethos)

Staying on an AirBnB? Look for the cameras August 07, 2019 05:04 PM

When going on a trip it is now common practice to consider staying in a rented apartment or house instead of a hotel or hostel, mostly thanks to AirBnB, which made it really easy and convenient for both sides of the deal. Most of the time the price is super competitive and I would say a great fit for many situations.

However, as happens with almost anything, it has its own set of problems and challenges. One example of these new challenges is the reports (and confirmations) that some, let’s call them malicious hosts, have been putting cameras in place to monitor their guests during their stay.

With a quick search on the internet you can find several such reports.

Someone equipped with the right knowledge and a computer can try to check if a camera is connected to the WiFi network, like this person did:

Toot describing a camera that was hidden inside a box

If this is your case, the following post provides a few guidelines to look for the cameras:

Finally, try to figure out the public IP address of the network you are on ( https://dshield.org/api/myip ) and either run a port scan from the outside to see if you find any odd open ports, or look it up in Shodan to see if Shodan found cameras on this IP in the past (but you likely will have a dynamic IP address).

InfoSec Handlers Diary Blog

This page even provides a script that you can execute to automatically do most steps explained on the above article.

However, sometimes you don’t bring your computer with you, which means you would have to rely on your smartphone to do this search. I’m still trying to find a good, trustworthy and intuitive app to recommend, since using nmap on Android will not help the less tech-savvy people.

Meanwhile, I hope the above links provide you with some ideas and useful tools to look for hidden cameras while you stay on a rented place.

Gustaf Erikson (gerikson)

July August 07, 2019 03:51 PM

August 05, 2019

eta (eta)

Presuming mental health issues considered harmful August 05, 2019 11:00 PM

This post isn’t at all technical, unlike the other ones. Here be dragons! Feel free to go and read something more sane if posts about non-technical interpersonal relations aren’t your cup of tea.

It’s generally acknowledged that talking to other people is a hard problem. Generally, though, in order to make it less of one, we have this amazing thing called ‘courtesy’ that gets wheeled out to deal with it, and make everything alright again. The point of being polite and courteous toward people, really, is to acknowledge that not everyone sees things the way you do, and so you need to account for that by giving everyone a bit of leeway. In a way, it shows respect for the other person’s way of thinking – being polite allows you to consider that your opinion may not be better than theirs. And, by and large, it works (except for the examples in that Wait But Why post there, but oh well).

I don’t want to talk about the situations where it works, though, because those are largely uninteresting! Where things get interesting (read: harmful) is when people, for one reason or another, decide that this respect for others is just not something they want to bother with.

Criticism

Of course, you don’t always agree with what other people are saying or doing; in fact, you might believe that, according to the way you see things, they’re being downright horrible! When this happens, there are usually a number of options available to you:

  1. Bring the issue up with the person (i.e. directly give them some constructive criticism)
  2. Tell your friends about the situation, and see what they think
  3. Start bad-mouthing the person when they’re not around

Option (1) might seem like the ‘best’ thing to do, but often it’s not; delivering criticism to people only works if they’re actually willing to receive it. Otherwise, all they hear is someone who thinks they know how to run their life better than them, which they perceive as disrespectful and somewhat rude – exactly the opposite of what you might have intended. Even if they don’t mind criticism, they may disagree with it and refuse to act on it, without getting angry at the fact that you delivered it, which is also fair enough.

Nonviolent Communication

When delivering criticism in this way, it’s also important to emphasise how your opinion is highly subjective; this is kind of what Nonviolent Communication (NVC) is trying to get at. Instead of saying things like:

You were very hurtful when you did X.

(which can come across as absolutism - you were very hurtful, according to some globally defined definition of ‘hurtful’), NVC advocates for statements along the lines of:

When you did X, I felt very hurt by that.

Here, this can’t possibly be disagreed with, as you’re only stating your subjective experience of what happened, instead of making a seemingly objective judgement - you’re not saying the person is bad, or that they necessarily “did anything wrong”; rather, you’re giving them feedback on what effect their actions have had, which they can use to be a better person in future. (Or not, if they don’t think your experience is worth caring about.)

Phoning a friend

Because option (1) is such a minefield sometimes – you have to try and phrase things the right way, avoid getting angry or upset yourself, and even then the person might hate you a bit for the criticism – most people opt for option (2) in everyday life, which is to discuss the problem with some friends.

This is distinct from option (3), in which you actively start spouting abuse about how terrible the person is; in (2), you’re trying to neutrally share your view with only a few friends of yours, with the expectation that they may well turn around and tell you that you’re actually the person at fault here. That is to say, the idea of (2) is not to presume that you’re in the right automatically, and for the people you tell to be blind yes-men who go along with it (!). Rather, the point is to get some crowd-sourced feedback on the problem at hand, in order to give you some perspective. It may even then lead to (1), or maybe one of your friends letting this person know, or whatever – the point is, option (2) is usually a pretty good solution, which is why it’s probably quite popular.

Giving up, or worse

I don’t need to say something like “option (3) [i.e. bad-mouthing other people] is bad”, because you probably knew that already. It’s also not as if every person in the world is a paragon of virtue who would never do anything so shocking, either; people share their negative judgements on other people quite widely all the time, and that’s just part of life. Obviously, it has interesting implications for your relationship with the person you’re bad-mouthing, but that’s your own problem – and is also something you’re unlikely to care about that much, or else you wouldn’t spread your opinion.

No, what I’m more interested in discussing is where this option turns more toxic than it usually is, which is usually helped by the context present in certain institutions nowadays.

Mental health, in 2019

In universities, schools, and other such educational institutions, a rather large focus has recently been placed on the idea of dealing with students’ mental health, for one reason or another. Reports like this 2018 NUS blog post about how Oxbridge may be ‘in decline’, due to how taxing going there can be for your mental health, abound; in many places, safeguarding children is the new buzzword. I’m not wishing to in any way dismiss or trivialize this focus; it’s arguably a good thing, and, at any rate, that’d be for another day and another blog post.

Rather, I want to discuss a rather interesting side-effect of this focus, and how it might interact with the things we’ve just been discussing in the rest of the post.

A not-entirely-fictitious example

Suppose someone has a problem with you. You might not necessarily have much of an idea as to why, or you might do; it doesn’t really matter. Clearly, they think option (1) (telling you about it directly) is right out; you have no way of knowing whether or not they did (2), but you decide you don’t care that much. “They have a problem with me, but I don’t think that’s really anything I need to worry about; rather, I think they’re as responsible for this situation as I am.” In a normal situation, this would be alright; everyone’s entitled to their own standpoint, after all, and people generally tend to respect one another.

However, this person really doesn’t like you. As part of that, they’re building their own internal list of things you’ve done that support this standpoint; everything you say, or do, that can be in some way interpreted as validating their view gets added, and their opinion of you gets worse and worse. Which would also be fine, as long as you didn’t actually need to rely on this person for anything important; they can sit over there with their extreme judgements, and it doesn’t really bother you at all: what are they going to do about it?

Well, but, remember the context I was talking about. In a lot of institutions, the management are constantly on the lookout for possible ‘mental health issues’, which isn’t a bad thing at all. However, this can give your fun new friend some extra leverage; all they need to do is find something that would seem to suggest you’re not 100% alright, or that you might be in need of some help. (Either that, or they cherry-pick examples of you supposedly being nasty, and claim that their mental health is being affected. For bonus points, one could even do both!)

Now, they can go and deliver a long spiel about how you’re clearly not right in the head, or something to that effect, and how it’s caused great problems for them. They don’t really have to worry that much about it being traced back to them, of course; usually in these kinds of systems people reporting problems are granted a degree of anonymity, in order to prevent people being put off from reporting anything at all.

And, if you haven’t been very carefully controlling everything you do or say to ensure none of it could possibly be used against you, you might now find yourself a bit stuck; how, after all, do you counter the allegations laid against you? You often don’t even know who raised them, and you’re lucky if you’re even told anything about what they actually are; rather, you now have to contend with the completely fabricated idea that there’s something wrong with you, which tends to be awfully sticky and hard to get rid of once people get wind of it.

Even though that’s not at all true, and never was.

Andrew Montalenti (amontalenti)

Parse.ly’s brand refresh August 05, 2019 04:59 PM

Here’s how Parse.ly’s original 2009 logo looked:

Parse.ly has some fun startup lore from its early days about how we “acquired” this logo. I wrote about this in a post entitled, “Parse.ly: brand hacking”:

Our first Parse.ly logo was designed as a trade for another domain I happened to own. It was the dormant domain for a film project for one of my friends, Josh Bernhard. I had registered it for him while we were both in college. […] It so happened that my friend had picked the name “Max Spector” for his film, and thus registered maxspector.com. The film never came to fruition, so the domain just gathered dust for awhile. But, Max Spector happened to be the name of a prominent San Francisco designer. And Max got in touch with me about buying the domain for his personal website. Acting opportunistically, I offered it in trade for a logo for Parse.ly. To my surprise, he agreed.

I still look back at the logo fondly, though, it being nearly a decade old at this point, it obviously has that dated “web 2.0 startup” feel.

A stroll down Memory Lane…

Every era of tech startups has design trends that influence it.

In 2009, I think our typeface was somehow influenced by two logos, WordPress and Twitter. That kind of makes sense, given how big these were in our community of tech/media at the time. The WordPress logo uses a smallcaps and serifs-heavy typeface to this day:

As for those heavy gradients, shadows, and emboss effects we had with the Parse.ly green colorings, I think that might have come from Twitter; see, for example, this edition of their 2009 logo:

Our simplified revised logo

The first revision to that original logo came in 2012, when we decided to simplify the logo coinciding with the commercial launch of Parse.ly Analytics. That resulted in us simplifying from a complex 7-leaf icon to a much simpler 3-leaf one. It also had us remove “circa 2009 logo flourishes” like typeface coloring with gradient and shadow.

We preserved the custom typeface that Max had designed for us, and coupled it with the simpler icon.

We still had a touch of green gradient in here, but only in the icon itself.

Further refinement

When we hired our head of design, who still works at Parse.ly to this day, we further simplified the logo by removing all gradient effects from the icon, and choosing a friendlier green color which became part of our standard style and palette.

That served us well for several more years. I think this also matched the evolving maturity of tech company brands.

Redesigning Parse.ly’s logotype

This year, Parse.ly is celebrating another year of sustainable growth atop customer revenue. The world of 2019 is very different than the world of 2009, when our logo was originally designed.

In particular, the extreme serifs, small caps style, and washed-out black coloring of our original logotype lacks confidence and affability, despite our brand becoming, over the years, quite confident and affable.

As a result, we’ve decided to give our logo and brand a refresh.

Here it is:

And here is a little style guide snippet to show it in various forms and colors:

What I love about this evolution is that it represents a big change: we have completely thrown away our old logotype. The new type is more open, more friendly, and more confident. The parts I particularly like include the open “P” and “e”, the tight kerning, and the distinctive dot. The color of the logotype is confident and matches the Parse.ly green well.

The icon looks the same, but it, too, evolved slightly.

Click/watch the video below to see the change between the old and new logo:

See if you can spot the difference.

Evolving the Parse.ly icon

For the icon, the changes were subtle and all about continuous refinement.

Specifically: we made the little “leaf slits” a bit more open, so that the icon appears more textured when scaled down to low resolution (e.g. app icons, site favicons, and so on). We also changed the proportion of the icon slightly to better balance with the new type, and tightened the location of the dot.

Those changes are so subtle, you need to blink to notice them. I’ve helped you out by making this video: click/watch this one to see the subtle change to the leaf icon:

I’m really pleased with these changes and I want to thank Sven Kils, the graphic designer in Frankfurt, Germany, who helped us with the brand refresh.

Like all other things at Parse.ly, our brand is a process of perpetual simplification.

This year was special for me because Parse.ly crossed over several important company milestones. This included a 10-year company anniversary (how on earth did we make it this long and this far?), a new office for our staff in NYC (we’re in Chelsea now, instead of Midtown East); and, a new bigger and more ambitious go-to-market strategy, coinciding with the hiring of our excellent Chief Revenue Officer, Nick.

This also coincided with my own personal return to New York City, which is where Parse.ly began before my relocation to Charlottesville, VA for eight years. I’ve come home to where it all began, and just as it’s starting to get really good.

We also have an upcoming website redesign, market re-positioning, and supporting materials (such as explainer videos), which I’m quite excited about.

I’ll give a little preview of our new market positioning here:

We believe the most successful companies in the digital world are the ones with the best content.

At Parse.ly, we built the world’s most innovative content analytics system, which drove the growth of the web’s biggest sites. We took the expertise and data from that success, and we’ve made it available to every company.

Today, every company is a content company. Every marketer is a content marketer. You need to create and measure content like the pros in order to win an audience.

Imagine you had the audience scale of The Wall Street Journal. Or, the visitor loyalty of the NFL. Or, the branding of Bloomberg. What would that do for your business? These companies used Parse.ly to build a content strategy that creates loyalty, engages visitors, and converts them into revenue.

Leading brands like HelloFresh, TheKnot, PolicyGenius, and Harvard University have also recognized the power of content analytics to transform their marketing efforts, by growing their share of traffic and attention.

We have entered the era of data-informed content. And in this era, the web’s best publishers and brands rely on Parse.ly data to grow a loyal and engaged audience for their content. Join them.

I hope we get to start using this logo more widely in our materials as we pursue a web/collateral update through the next couple of months. Here’s to the next 10 years of continuous improvement!

Pete Corey (petecorey)

Embedding React Components in Jekyll Posts August 05, 2019 12:00 AM

Last week I published an article on “clipping convex hulls”. The article included several randomly generated examples. Every time you refresh the article, you’re given a new set of examples based on a new set of randomly generated points.

I thought we could dive into how I used React to generate each of those examples, and how I embedded those React components into a Jekyll-generated static page.

Creating Our Examples

To show off the process, let’s create three React components and embed them into this very article (things are about to get meta). We’ll start by creating a new React project using create-react-app:

create-react-app examples
cd examples
yarn start

This examples project will hold all three examples that we’ll eventually embed into our Jekyll-generated article.

The first thing we’ll want to do in our examples project is to edit public/index.html and replace the root div with three new divs, one, two, and three: one to hold each of our examples:


<body>
  <noscript>You need to enable JavaScript to run this app.</noscript>
  <div id="one"></div>
  <div id="two"></div>
  <div id="three"></div>
</body>

Next, we’ll need to instruct React to render something into each of these divs. We can do that by editing our src/index.js file and replacing its contents with this:


import React from 'react';
import ReactDOM from 'react-dom';
import "./index.css";

ReactDOM.render(<One />, document.getElementById('one'));
ReactDOM.render(<Two />, document.getElementById('two'));
ReactDOM.render(<Three />, document.getElementById('three'));

We’ll need to define our One, Two, and Three components. Let’s do that just below our imports. We’ll simply render different colored divs for each of our three examples:


const One = () => (
  <div style={{ height: "100%", backgroundColor: "#D7B49E" }} />
);

const Two = () => (
  <div style={{ height: "100%", backgroundColor: "#DC602E" }} />
);

const Three = () => (
  <div style={{ height: "100%", backgroundColor: "#BC412B" }} />
);

We’re currently telling each of our example components to fill one hundred percent of their parents’ height. Unfortunately, without any additional information, these heights will default to zero. Let’s update our index.css file to set some working heights for each of our example divs:


html, body {
  height: 100%;
  margin: 0;
}

#one, #two, #three {
  height: 33.33%;
}

If we run our examples React application, we’ll see each of our colored example divs fill approximately one third of the vertical height of the browser.

Embedding Our Examples

So now that we’ve generated our example React components, how do we embed them into our Jekyll post?

Before we start embedding our React examples into a Jekyll post, we need a Jekyll post to embed them into. Let’s start by creating a new post in the _posts folder of our Jekyll blog, called 2019-08-05-embedding-react-components-in-jekyll-posts.markdown:


---
layout: post
title:  "Embedding React Components in Jekyll Posts"
---

Last week I published an article on "clipping convex hulls"...

Great. Now that we have a post, we need to decide where our examples will go. Within our post, we need to insert three divs with identifiers of one, two, three, to match the divs in our examples React project.

Let’s place one here:

Another here:

And the third just below the following code block that shows off how these divs would appear in our post:


...

Let's place one here:

<div id="one" style="height: 4em;"></div>

Another here:

<div id="two" style="height: 4em;"></div>

...

<div id="three" style="float: right; height: 4em; margin: 0 0 0 1em; width: 4em;"></div>

...

Notice that we’re explicitly setting the heights of our example components, just like we did in our React project. This time, however, we’re setting their heights to 4em, rather than one third of their parents’ height. Also notice that the third example is floated right with a width of 4em. Because our React components conform to their parents’ properties, we’re free to size and position them however we want within our Jekyll post.

At this point, the divs in our post are empty placeholders. We need to embed our React components into the post in order for them to be filled.

Let’s go into our examples React project and build the project:


yarn build

This creates a build folder within our examples project. The build folder contains the compiled, minified, standalone version of our React application that we’re free to deploy as a static application anywhere on the web.

You’ll notice that the build folder contains everything needed to deploy our application, including an index.html file, a favicon.ico, our bundled CSS, and more. Our Jekyll project already provides all of this groundwork, so we’ll only be needing a small portion of our new build bundle.

Specifically, we want the Javascript files dumped into build/static/js. We’ll copy the contents of build/static/js from our examples React project into a folder called js/2019-08-05-embedding-react-components-in-jekyll-posts in our Jekyll project.

We should now have three Javascript files and three source map files being served statically by our Jekyll project. The final piece of the embedding puzzle is to include the three scripts at the bottom of our Jekyll post:


<script src="/js/2019-08-05-embedding-react-components-in-jekyll-posts/runtime~main.a8a9905a.js"></script>
<script src="/js/2019-08-05-embedding-react-components-in-jekyll-posts/2.b8c4cbcf.chunk.js"></script>
<script src="/js/2019-08-05-embedding-react-components-in-jekyll-posts/main.2d35bbcc.chunk.js"></script>

Including these scripts executes our React application, which looks for each of our example divs, one, two, and three on the current page and renders our example components into them. Now, when we view our post, we’ll see each of our example components rendered in their full, colorful glory!

August 04, 2019

Ponylang (SeanTAllen)

Last Week in Pony - August 4, 2019 August 04, 2019 08:42 AM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

August 03, 2019

Alex Wilson (mrwilson)

Notes from the Week #28 August 03, 2019 12:00 AM

Notes from the Week #28

Here’s a non-exhaustive summary of what happened this week.

One

The work I mentioned last week to run our smoke-tests from a different data-centre is effectively complete.

We’re running the new system in parallel with the existing system of checks for a bit to ensure that they behave the same before switching off the existing system.

I’m consistently impressed by the thought that’s clearly gone into Concourse CI’s domain concepts and API.

Two

I’ve almost finished the write up for the “Mental Health in Software Development” session that I ran last month.

It turns out that writing up an hour and a half of post-its into a structured blogpost takes a while, but I’m pleased with how it’s coming together.

I’ve had a couple of realisations during the reflection on the session, especially around intersectionality with regards to mental health and other kinds of diversity, but more on that in the post.

Three

Wednesday was a bit of a rough day — for whatever reason, I was off my game and wasn’t as well equipped as I would have liked to contribute during a two-hour mapping session.

I was annoyed at myself about this for a while, but going for a brief walk helped clear my head. In future I need to be better about just stepping out of the room if I need some air, much as I would be okay with it if someone else asked to.

Four

I have to sign some of my commits at work to verify that they’ve come from me. It’s been a while since I’ve needed to do this, and it’s got me thinking about a problem I ran into while pairing/mobbing.

git supports signing commits and tags, but with a single key. How can you verify (a) that a single commit was paired/mobbed on and (b) the people whose names are on the commit were actually involved?

I’m playing with a rough idea of “primary committer signs the commit, secondary committers create and sign unique tags”, with something like a pre-receive-hook on the git server, which has a list of allowed public keys and rejects pushes where there are <2 verified authors.
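
To make that idea a little more concrete, here is a rough Node sketch of the server-side check; the tag naming scheme, the two-author threshold, and the script itself are all just the hypothetical design described above, not an existing tool:

const { execSync } = require('child_process');

// Run a git command; return its output, or null if it exits non-zero
// (e.g. a missing or invalid signature).
function git(args) {
    try {
        return execSync(`git ${args}`, { stdio: ['ignore', 'pipe', 'ignore'] }).toString().trim();
    } catch (err) {
        return null;
    }
}

// Count verified authors for a commit: the signed commit itself, plus any
// signed co-author tags (e.g. a8b7c6-co-author-1) pointing at it. A real hook
// would also check each signer against the list of allowed public keys and
// make sure the signers are distinct people.
function verifiedAuthorCount(commit) {
    let count = 0;
    if (git(`verify-commit ${commit}`) !== null) count += 1;

    const tags = (git(`tag --points-at ${commit}`) || '').split('\n').filter(Boolean);
    for (const tag of tags) {
        if (git(`verify-tag ${tag}`) !== null) count += 1;
    }
    return count;
}

// In a pre-receive hook, this would run for every newly pushed commit.
const commit = process.argv[2];
if (verifiedAuthorCount(commit) < 2) {
    console.error(`${commit}: fewer than 2 verified authors`);
    process.exit(1);
}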

The disadvantage of this is that tags are deliberately lightweight and easy to create, and so just as easy to delete. I suppose that this could be enforced on the server-side too (forbid deletions of tags).

The tag itself would have to have a unique id, something that uses the short-hash of the commit like a8b7c6-co-author-1. If tags were being used for release management, this might "pollute" your tag history to an unacceptable extent.

However, these might be acceptable trade-offs for being able to verify that everyone who claims to have worked on the commit did actually do so (assuming decent protections of each person’s signing key).

Originally published at https://blog.probablyfine.co.uk on August 3, 2019.

July 30, 2019

Gustaf Erikson (gerikson)

Fotografiska, July 2019 July 30, 2019 03:41 PM

Mandy Barker - Sea of Artifacts

An earnest exploration of plastic in seas and waterways. For some reason this exhibit was in a room even more dimly lit than usual at Fotografiska - and that’s saying something. Sadly nothing that’s not been said or done many times before.

Scarlett Hooft Graafland - Vanishing Traces

This grew on me. Staged images in different environments with surreal components (yet of course it’s all “analog” and there’s no manipulation). Impressive for the visuals and conceptions alone.

Vincent Peters - Light Within

Peters photographs our modern Hollywood idols as if they were 1940s Hollywood idols.

James Nachtwey - Memoria

A visual onslaught of images from famines, wars, catastrophes and field hospitals. Almost unbearable in its intensity. I became numb after just a few walls. It depresses me that future epigones will have no shortage of similar subjects, all without having to travel that far.

July 29, 2019

Derek Jones (derek-jones)

First language taught to undergraduates in the 1990s July 29, 2019 05:44 PM

The average new graduate is likely to do more programming during the first month of a software engineering job than they did during a year as an undergraduate. Programming courses for undergraduates are really about filtering out those who cannot code.

Long, long ago, when I had some connection to undergraduate hiring, around 70-80% of those interviewed for a programming job could not write a simple 10-20 line program; I’m told that this is still true today. Fluency in any language (computer or human) takes practice, and the typical undergraduate gets very little practice (there is no reason why they should, there are lots of activities on offer to students and programming fluency is not needed to get a degree).

There is lots of academic discussion around which language students should learn first, and what languages they should be exposed to. I have always been baffled by the idea that there was much to be gained by spending time teaching students multiple languages, when most of them barely grasp the primary course language. When I was at school the idea behind the trendy new maths curriculum was to teach concepts, rather than rote learning (such as algebra; yes, rote learning of the rules of algebra); the concept of number-base was considered to be a worthwhile concept and us kids were taught this concept by having the class convert values back and forth, such as base-10 numbers to base-5 (base-2 was rarely used in examples). Those of us who were good at maths instantly figured it out, while everybody else was completely confused (including some teachers).

My view is that there is no major teaching/learning impact on the choice of first language; it is all about academic fashion and marketing to students. Those who have the ability to program will just pick it up, and everybody else will flounder and do their best to stay away from it.

Richard Reid was interested in knowing which languages were being used to teach introductory programming to computer science and information systems majors. Starting in 1992, he contacted universities roughly twice a year, asking about the language(s) used to teach introductory programming. The Reid list (as it became known), was regularly updated until Reid retired in 1999 (the average number of universities included in the list was over 400); one of Reid’s ex-students, Frances VanScoy, took over until 2006.

The plot below is from 1992 to 2002, and shows the languages with more than 3% usage in any year (code+data):

Languages used to teach introductory programming, 1992-2002.

Looking at the list again reminded me how widespread Pascal was as a teaching language. Modula-2 was the language that Niklaus Wirth designed as the successor of Pascal, and Ada was intended to be the grown up Pascal.

While there is plenty of discussion about which language to teach first, doing this teaching is a low status activity (there is more fun to be had with the material taught to the final year students). One consequence is lack of any real incentive for spending time changing the course (e.g., using a new language). The Open University continued teaching Pascal for years, because material had been printed and had to be used up.

C++ took a while to take-off because of its association with C (which was very out of fashion in academia), and Java was still too new to risk exposing to impressionable first-years.

A count of the total number of languages listed, between 1992 and 2002, contains a few that might not be familiar to readers.

          Ada 1087        Ada/Pascal 1      Beta 10         Blue 3          C 667
          C/Java 1        C/Scheme 1        C++ 910         C++/Pascal 1    Eiffel 29
          Fortran 133     Haskell 12        HyperTalk 2     ISETL 30        ISETL/C 1
          Java 107        Java/Haskell 1    Miranda 48      ML 16           ML/Java 1
          Modula-2 727    Modula-3 24       Oberon 26       Oberon-2 7      ObjPascal 22
          Orwell 12       Pascal 2269       Pascal/C 1      Prolog 12       Scheme 752
          Scheme/ML 1     Scheme/Turing 1   Simula 14       Smalltalk 33    SML 88
          Turing 71       Visual-Basic 3

I had never heard of Orwell, a vanity language foisted on Oxford Mathematics and Computation students. It used to be common for someone in computing departments to foist their vanity language on students; it enabled them to claim the language was being used and stoked their ego. Is there some law that enables students to sue for damages?

The 1990s was still in the shadow of the 1980s fashion for functional programming (which came back into fashion a few years ago). Miranda was an attempt to commercialize a functional language compiler, with Haskell being an open source reaction.

I was surprised that Turing was so widely taught. More to do with the stature of where it came from (the University of Toronto) than anything else.

Fortran was my first language, and is still widely used where high performance floating-point is required.

ISETL is a very interesting language, descended from the 1960s language SETL, that never really attracted much attention outside of New York. I suspect that Blue is BlueJ, a Java IDE targeting novices.

Jeremy Morgan (JeremyMorgan)

How to Nail Your Next Coding Interview July 29, 2019 04:45 PM

The room is silent except for the buzzing of the fluorescent lights. The judges across the table are staring at you, expressionless. Some have pen and paper, some don't. They're all staring at you. Your mouth is so dry it feels like you've been eating sawdust all day. You grab the marker and head for the whiteboard. One judge is staring at a laptop. It's time to show them a quicksort.

I've been on both sides of the coding interview. I've interviewed for jobs; sometimes I got the offer, sometimes I didn't. Sometimes I nailed the whiteboard tests and didn't get a call, and vice versa. I've interviewed probably hundreds of people in my career, and I did my best to make candidates feel comfortable, but many managers won't. They will try to trip you up and make you choke. We can argue about the effectiveness of a whiteboard interview later, but they will happen.

Let's look at some tips to rock your next coding interview. You can set yourself up for success and nail it. Don't be mistaken, this isn't a set of "hacks", "tricks", or some kind of brain dump. Don't use sneaky tactics or tricks to crack your way into a position you aren't qualified for. They'll just fire you later. Follow this path and you will nail the interview because you'll be a better developer.

What They're Looking for

Here's what most of the interviewers will be looking for, in no particular order:

  • Problem Solving - How well can you solve problems and, more importantly, what is your process? Many tests will look for this. 

  • Coding Skills - You'll need to be ruthlessly good here. I've handed people laptops and told them to write something for me. You can tell how much actual coding people have done by watching them do it. 

  • Technical Knowledge - This is where the trivia questions come in, but your interviewer wants to know just how technical you are. 

  • Experience - This is where you talk about your past projects. Interviewers want to hear your war stories, and what you learned from them. 

  • Culture Fit - I can't help you much with this one. This is where they see how you'll fit into a team. There are ways you can improve in this area.

So what do you need to do to get prepared? Let's dive in. 

Stage 1: Laying the Foundation

So here's what you need to be doing well before any interview. Days, weeks, or months in advance, you need to build some core skills. 

Learn Computer Science Basics

You need to get the basics of Computer Science down. You don't have to be Donald Knuth here, but you need to know the theory, language, and idioms. This is the bare minimum for an interview. If an interviewer starts casually mentioning a binary tree in the interview, you'd better know what they're talking about.
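
For instance, a binary search tree is exactly the sort of thing you should be able to sketch from memory. Here's a minimal Javascript version, offered purely as an illustration (it isn't taken from any of the resources below):


// A minimal binary search tree: insert and lookup (illustration only).
class Node {
  constructor(value) {
    this.value = value;
    this.left = null;
    this.right = null;
  }
}

function insert(root, value) {
  if (root === null) return new Node(value);
  if (value < root.value) root.left = insert(root.left, value);
  else root.right = insert(root.right, value);
  return root;
}

function contains(root, value) {
  if (root === null) return false;
  if (value === root.value) return true;
  return value < root.value ? contains(root.left, value) : contains(root.right, value);
}

let root = null;
[5, 3, 8, 1, 4].forEach(v => { root = insert(root, v); });
console.log(contains(root, 4)); // true
console.log(contains(root, 7)); // false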

  1. Review The Basics of Computer Science Tutorial - some of this may be pretty basic to you, but it gives you a framework of things to pay attention to. Where do your strengths lie? If you find any areas where you are weak, work to build your knowledge and skills in that area. 

  2. Review Teach Yourself Computer Science - Again, we're talking basic, foundational stuff that you may be lacking. You need to know the basics and speak the language. 

  3. You can even take a course on Computer Science 101 from Stanford, for free.

Learn Different Algorithms

Algorithms run the world, and if you're a developer, you'll need to know them. So how do you get good at algorithms? It's not black magic or a secret art. 

According to Geeks for Geeks these are the top 10 algorithms in interview questions:

  • Graph
  • Linked List
  • Dynamic Programming
  • Sorting and Searching
  • Tree / Binary Search Tree
  • Number Theory
  • BIT Manipulation
  • String / Array 

It seems pretty accurate to me. It's super helpful to know this. What are you great at in this list? What are you weak at? This link is excellent for getting a high-level view of each and some examples. 
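
To make this concrete, here's roughly what a whiteboard-sized sorting answer might look like: a minimal Javascript quicksort, offered only as an illustration (it isn't taken from the Geeks for Geeks list above):


// Quicksort (illustration only): not in-place, but short enough for a whiteboard.
function quicksort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const smaller = rest.filter(x => x < pivot);
  const larger = rest.filter(x => x >= pivot);
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}

console.log(quicksort([5, 3, 8, 1, 4])); // [1, 3, 4, 5, 8]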

These courses will help you absolutely dominate in this area:

In about 6 hours you'll be able to understand and really talk the talk when it comes to algorithms. 

Action: Study this stuff. Learn it. Know it.

Stage 2: Practice Practice Practice

Here's another thing you need to make a part of your routine: practice. You need to practice this stuff a few times a week or more to really get good. 

The more you practice, the better you'll do in any whiteboard situation. Most of the time they ask for pseudocode, but if you really put the practice in, you can write real, compilable code on a whiteboard without blinking. 

  1. Check out the HackerRank Interview Preparation Kit - This is where the rubber meets the road with all that algorithm knowledge you now have. Put it to work with real examples. 

  2. Start participating in HackerRank Challenges - This is how you can really apply your knowledge in different areas. This is the ultimate whiteboard practice area. Do an exercise a day when you can. Work your way through many of the challenges. If you just do a challenge a day for 30 days, you can nail a whiteboard interview and will become a better coder. I promise you. 

  3. Sign up for LeetCode and start participating in challenges - Nothing sharpens your skills like competing with others. There are tons of challenges and fun stuff here. 

  4. Check out Project Euler and start writing code to solve the problems there. Project Euler is a set of math and programming problems that really challenge your problem-solving abilities. Use code to solve the problems here and your skills will grow (a quick sketch of the first problem follows this list). 
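
As a taste of the Project Euler style, the classic first problem asks for the sum of all the multiples of 3 or 5 below 1000. A minimal Javascript sketch (my paraphrase of the problem, not the site's exact wording):


// Project Euler problem 1 (sketch): sum the multiples of 3 or 5 below 1000.
let sum = 0;
for (let n = 0; n < 1000; n++) {
  if (n % 3 === 0 || n % 5 === 0) sum += n;
}
console.log(sum); // 233168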

Action: Write some code. A lot of it. Get ruthlessly good at it.

Stage 3: Build Your Public Profile

You need to have work you've done online and accessible. Every recruiter, manager, or someone interested in you will Google you. Make sure they find your work. 

  1. Put your stuff Online and start a GitHub account if you don't already have one. Put all of your code up there. Your personal projects, stuff you write for tutorials, all of it.

  2. Create accounts on sites like JSFiddle and CodePen - This is mostly for web developers but it creates a good spot where people can find your work. 

  3. Create a Blog - It doesn't matter if it's WordPress, GitHub pages, or a custom hosted server setup like my blog; you should create a blog that talks about what you're working on, what you're learning, and what you can teach. 

Note: I have thrown away resumes of people when I couldn't find their GitHub or any public work. This shows that they aren't enthusiastic and managers who look for passionate people will look for what you've posted publicly. Put your work out there no matter how good it is. 

Action: Sign up and start making your code and projects public.

Stage 4: Prepare for Interviews

Preparation ultimately determines your success. Nobody succeeds without preparation. Here's how you can prepare for your interview and kick the ball through the uprights. 

  1. Pick up a copy of Cracking the Code Interview - This book is the bible of coding interviews. Like this article, it doesn't tell you how to cheat or shortcut your way into getting hired - it gives you a great framework to upgrade your skills. Practice questions are included and it gives you all the tools you need to really sharpen your skillset. 

  2. Take this Course on preparing for a job interview - This gives you the information you need to really prepare yourself. 

    This course covers:

    • Job Interview Basics
    • Algorithm Based Questions
    • Typical Questions
    • Computer Science Questions
    • Getting Experience

    It really covers what you need to know to succeed in 2 1/2 hours. Well worth it. 
  3. Watch this video about how to prepare yourself for Developer Job Interviews - This is created by John Sonmez, who you may already know. There's no question that John is a successful developer and he's more than willing to share what has worked for him. It's well worth checking out. 

  4. Exercise - Ok I know this will sound silly, but here's something that will give you an extra edge. No matter what time of day your interview is, hit the gym or do some cardio an hour or two before the interview. This will ensure:

    • You are refreshed and energetic
    • You have oxygen flowing through your blood
    • Your muscles will be relaxed

    A good hard workout will make sure you are charged and ready to go for your interview. You don't want to seem tired or lethargic in your interview. You want to be at your physical and mental BEST. 

Preparation is everything. The more you prepare the better you'll feel on the interview day.

Action: Start training like a boxer. Get fight ready.

Conclusion

Coding interviews can be brutal. You can take the sting off them by doing the following:

  • Learning 
  • Sharpening your skills
  • Practicing

These things will ensure your success. Trust me: after working on crazy problems on LeetCode and HackerRank, the whiteboard tests are so much easier. Even doing just this will set you up to succeed. 

If you have comments feel free to share or yell at me on Twitter and we can discuss it. 

Sevan Janiyan (sevan)

Something blogged (on pkgsrcCon 2019) July 29, 2019 02:55 PM

pkgsrcCon 2019 was held in Cambridge this year. The routine as usual was social on the Friday evening, day of talks on the Saturday, hacking on things on the Sunday. Armed with a bag of bits to record the talks, I headed up to Cambridge on Friday evening to meet up with sborrill before heading to …

Pete Corey (petecorey)

Clipping Convex Hulls with Thi.ng! July 29, 2019 12:00 AM

I recently found myself knee-deep in a sea of points and polygons. Specifically, I found myself with two polygons represented by two random sets of two dimensional points. I wanted to convert each set of points into a convex polygon, or a convex hull, and find the overlapping area between the two.

After doing some research, I learned about the existence of a few algorithms that would help me on my quest:

  • Graham’s scan can be used to build a convex hull from a set of randomly arranged points in two dimensional space.
  • The Sutherland–Hodgman algorithm can be used to construct a new “clip polygon” that represents the area of overlap between any two convex polygons.
  • The shoelace formula can be used to compute the area of any simple polygon given its clockwise or counterclockwise sorted points.

My plan of attack is to take my two random sets of two dimensional points, run them through Graham’s scan to generate two convex hulls, compute a new “clip polygon” from those two polygons, and then use the shoelace formula to compute and compare the area of the clip polygon to the areas of the two original polygons.
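
As an aside, the shoelace formula is simple enough to sketch by hand. Here’s a minimal Javascript version, just for illustration (we’ll lean on @thi.ng/geom-poly-utils for the real thing below):


// Shoelace formula (illustration only): area of a simple polygon whose
// points are sorted clockwise or counterclockwise.
const polygonArea = points =>
  Math.abs(
    points.reduce((sum, [x, y], i) => {
      const [nx, ny] = points[(i + 1) % points.length];
      return sum + (x * ny - nx * y);
    }, 0)
  ) / 2;

console.log(polygonArea([[0, 0], [4, 0], [4, 3], [0, 3]])); // 12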

Before diving in and building a poorly implemented, bug riddled implementation of each of these algorithms, I decided to see if the hard work had already been done for me. As it turns out, it had!

I discovered that the thi.ng project, which is a set of computational design tools for Clojure and Clojurescript and also includes a smorgasbord of incredibly useful Javascript packages, contains exactly what I was looking for.

  • @thi.ng/geom-hull implements Graham’s scan.
  • @thi.ng/geom-clip implements the Sutherland–Hodgman clipping algorithm.
  • @thi.ng/geom-poly-utils implements the shoelace formula.

Perfect!

Let’s use these libraries, along with HTML5 canvas, to build a small proof of concept. We’ll start by writing a function that generates a random set of two dimensional points:


const generatePoints = (points, width, height) =>
  _.chain(points)
    .range()
    .map(() => [
      width / 2 + (Math.random() * width - width / 2),
      height / 2 + (Math.random() * height - height / 2)
    ])
    .value();

Next, let’s use our new generatePoints function to generate two sets of random points and render them to a canvas (we’re glossing over the canvas creation process):


const drawPoints = (points, size, context) =>
  _.map(points, ([x, y]) => {
    context.beginPath();
    context.arc(x, y, size, 0, 2 * Math.PI);
    context.fill();
  });

let points1 = generatePoints(5, width * ratio, height * ratio);
let points2 = generatePoints(5, width * ratio, height * ratio);

context.fillStyle = "rgba(245, 93, 62, 1)";
drawPoints(points1, 10, context);

context.fillStyle = "rgba(118, 190, 208, 1)";
drawPoints(points2, 10, context);

The points on this page are generated randomly, once per page load. If you’d like to continue with a different set of points, refresh the page. Now we’ll use Graham’s scan to convert each set of points into a convex hull and render that onto our canvas:


import { grahamScan2 } from "@thi.ng/geom-hull";

const drawPolygon = (points, context) => {
  context.beginPath();
  context.moveTo(_.first(points)[0], _.first(points)[1]);
  _.map(points, ([x, y]) => {
    context.lineTo(x, y);
  });
  context.fill();
};

let hull1 = grahamScan2(points1);
let hull2 = grahamScan2(points2);

context.fillStyle = "rgba(245, 93, 62, 0.5)";
drawPolygon(hull1, context);

context.fillStyle = "rgba(118, 190, 208, 0.5)";
drawPolygon(hull2, context);

We can see that there’s an area of overlap between our two polygons (if not, refresh the page). Let’s use the Sutherland-Hodgman algorithm to construct a polygon that covers that area and render its outline to our canvas:


import { sutherlandHodgeman } from "@thi.ng/geom-clip";

let clip = sutherlandHodgeman(hull1, hull2);

context.strokeStyle = "rgba(102, 102, 102, 1)";
context.beginPath();
context.moveTo(_.first(clip)[0], _.first(clip)[1]);
_.map(clip, ([x, y]) => {
  context.lineTo(x, y);
});
context.lineTo(_.first(clip)[0], _.first(clip)[1]);
context.stroke();

Lastly, let’s calculate the area of our two initial convex hulls and the resulting area of overlap between them. We’ll render the area of each at the “center” of each polygon:


import { polyArea2 } from "@thi.ng/geom-poly-utils";

const midpoint = points =>
  _.chain(points)
    .reduce(([sx, sy], [x, y]) => [sx + x, sy + y])
    .thru(([x, y]) => [x / _.size(points), y / _.size(points)])
    .value();

const drawArea = (points, context) => {
  let [x, y] = midpoint(points);
  let area = Math.round(polyArea2(points));
  context.fillText(area, x, y);
};

context.fillStyle = "rgba(245, 93, 62, 0.5)";
drawArea(hull1, context);
context.fillStyle = "rgba(118, 190, 208, 0.5)";
drawArea(hull2, context);
context.fillStyle = "rgba(102, 102, 102, 1)";
drawArea(clip, context);

As you can see, with the right tools at our disposal, this potentially difficult task is a breeze. I’m incredibly happy that I discovered the thi.ng set of libraries when I did, and I can see myself reaching for them in the future.

Update: The Thi.ng creator put together a demo outlining a much more elegant way of approaching this problem. Be sure to check out their solution, and once again, check out thi.ng!


Dan Luu (dl)

What is RISC? July 29, 2019 12:00 AM

This is an archive of a series of comp.arch USENET posts by John Mashey in the early to mid 90s, on the definition of reduced instruction set computer (RISC). Contrary to popular belief, RISC isn't about the number of instructions! This is archived here since, at least once a year, I see someone argue that RISC is obsolete or outdated because their understanding of RISC comes from the name, not from what RISC actually is. This is arguably a sign that RISC is very poorly named, but that's a separate topic.

PART I - ARCHITECTURE, IMPLEMENTATION, DIFFERENCES

WARNING: you may want to print this one to read it... (from preceding discussion):

Anyway, it is not a fair comparison. Not by a long stretch. Let's see how the Nth generation SPARC, MIPS, and 88K's do (assuming they last) compared to some new design from scratch.

Well, there is baggage and there is BAGGAGE. One must be careful to distinguish between ARCHITECTURE and IMPLEMENTATION:

a) Architectures persist longer than implementations, especially user-level Instruction-Set Architecture.
b) The first member of an architecture family is usually designed with the current implementation constraints in mind, and if you're lucky, software people had some input.
c) If you're really lucky, you anticipate 5-10 years of technology trends, and that modifies your idea of the ISA you commit to.
d) It's pretty hard to delete anything from an ISA, except where:
     1) You can find that NO ONE uses a feature (the 68020->68030 deletions mentioned by someone else).
     2) You believe that you can trap and emulate the feature "fast enough", e.g., microVAX support for decimal ops, 68040 support for transcendentals.

Now, one might claim that the i486 and 68040 are RISC implementations of CISC architectures ... and I think there is some truth to this, but I also think that it can confuse things badly:

Anyone who has studied the history of computer design knows that high-performance designs have used many of the same techniques for years, for all of the natural reasons, that is:

a) They use as much pipelining as they can, in some cases, if this means a high gate-count, then so be it.
b) They use caches (separate I & D if convenient).
c) They use hardware, not micro-code for the simpler operations.

(For instance, look at the evolution of the S/360 products. Recall that the 360 /85 used caches, back around 1969, and within a few years, so did any mainframe or supermini.)

So, what difference is there among machines if similar implementation ideas are used?

A: there is a very specific set of characteristics shared by most machines labeled RISCs, most of which are not shared by most CISCs.

The RISC characteristics:

a) Are aimed at more performance from current compiler technology (e.g., enough registers).
OR
b) Are aimed at fast pipelining in a virtual-memory environment with the ability to still survive exceptions without inextricably increasing the number of gate delays (notice that I say gate delays, NOT just how many gates).

Even though various RISCs have made various decisions, most of them have been very careful to omit those things that CPU designers have found difficult and/or expensive to implement, and especially, things that are painful, for relatively little gain.

I would claim, that even as RISCs evolve, they may have certain baggage that they'd wish weren't there ... but not very much. In particular, there are a bunch of objective characteristics shared by RISC ARCHITECTURES that clearly distinguish them from CISC architectures.

I'll give a few examples, followed by the detailed analysis:

MOST RISCs:

3a) Have 1 size of instruction in an instruction stream
3b) And that size is 4 bytes
3c) Have a handful (1-4) of addressing modes (it is VERY hard to count these things; will discuss later).
3d) Have NO indirect addressing in any form (i.e., where you need one memory access to get the address of another operand in memory)
4a) Have NO operations that combine load/store with arithmetic, i.e., like add from memory, or add to memory. (note: this means especially avoiding operations that use the value of a load as input to an ALU operation, especially when that operation can cause an exception. Loads/stores with address modification can often be OK as they don't have some of the bad effects)
4b) Have no more than 1 memory-addressed operand per instruction
5a) Do NOT support arbitrary alignment of data for loads/stores
5b) Use an MMU for a data address no more than once per instruction
6a) Have >=5 bits per integer register specifier
6b) Have >= 4 bits per FP register specifier

These rules provide a rather distinct dividing line among architectures, and I think there are rather strong technical reasons for this, such that there is one more interesting attribute: almost every architecture whose first instance appeared on the market from 1986 onward obeys the rules above ... Note that I didn't say anything about counting the number of instructions...

So, here's a table:

C: number of years since first implementation sold in this family (or first thing which with this is binary compatible). Note: this table was first done in 1991, so year = 1991-(age in table).
3a: # instruction sizes
3b: maximum instruction size in bytes
3c: number of distinct addressing modes for accessing data (not jumps). I didn't count register or literal, but only ones that referenced memory, and I counted different formats with different offset sizes separately. This was hard work... Also, even when a machine had different modes for register-relative and PC_relative addressing, I counted them only once.
3d: indirect addressing: 0: no, 1: yes
4a: load/store combined with arithmetic: 0: no, 1:yes
4b: maximum number of memory operands
5a: unaligned addressing of memory references allowed in load/store, without specific instructions
0: no never (MIPS, SPARC, etc)
1: sometimes (as in RS/6000)
2: just about any time
5b: maximum number of MMU uses for data operands in an instruction
6a: number of bits for integer register specifier
6b: number of bits for 64-bit or more FP register specifier, distinct from integer registers

Note that all of these are ARCHITECTURE issues, and it is usually quite difficult to either delete a feature (3a-5b) or increase the number of real registers (6a-6b) given an initial instruction set design. (yes, register renaming can help, but...)

Now: items 3a, 3b, and 3c are an indication of the decode complexity; 3d-5b hint at the ease or difficulty of pipelining, especially in the presence of virtual-memory requirements, and the need to go fast while still taking exceptions sanely; items 6a and 6b are more related to the ability to take good advantage of current compilers.

There are some other attributes that can be useful, but I couldn't imagine how to create metrics for them without being very subjective; for example "degree of sequential decode", "number of writebacks that you might want to do in the middle of an instruction, but can't, because you have to wait to make sure you see all of the instruction before committing any state, because the last part might cause a page fault," or "irregularity/assymetricness of register use", or "irregularity/complexity of instruction formats". I'd love to use those, but just don't know how to measure them. Also, I'd be happy to hear corrections for some of these.

So, here's a table of 12 implementations of various architectures, one per architecture, with the attributes above. Just for fun, I'm going to leave the architectures coded at first, although I'll identify them later. I'm going to draw a line between H1 and L4 (obviously, the RISC-CISC Line), and also, at the head of each column, I'm going to put a rule, which, in that column, most of the RISCs obey. Any RISC that does not obey it is marked with a +; any CISC that DOES obey it is marked with a *. So...

1991
CPU        Age        3a 3b 3c 3d      4a 4b 5a 5b        6a 6b     # ODD
RULE        6        =1 =4 5 =0      =0 =1 2 =1        >4 >3
-------------------------------------------------------------------------
A1        4         1  4  1  0         0  1  0  1         8  3+      1
B1        5         1  4  1  0         0  1  0  1         5  4       -
C1        2         1  4  2  0         0  1  0  1         5  4       -
D1        2         1  4  3  0         0  1  0  1         5  0+      1
E1        5         1  4 10+ 0         0  1  0  1         5  4       1
F1        5         2+ 4  1  0         0  1  0  1         4+ 3+      3
G1        1         1  4  4  0         0  1  1  1         5  5       -
H1        2         1  4  4  0         0  1  0  1         5  4       -   RISC
---------------------------------------------------------------
L4        26         4  8  2* 0*       1  2  2  4         4  2       2   CISC
M2        12        12 12 15  0*       1  2  2  4         3  3       1
N1        10        21 21 23  1        1  2  2  4         3  3       -
O3        11        11 22 44  1        1  2  2  8         4  3       -
P3        13        56 56 22  1        1  6  2 24         4  0       -

An interesting exercise is to analyze the ODD cases.

First, observe that of 12 architectures, in only 2 cases does an architecture have an attribute that puts it on the wrong side of the line. Of the RISCs:

A1 is slightly unusual in having more integer registers, and less FP than usual. [Actually, slightly out of date, 29050 is different, using integer register bank instead, I hear.]
D1 is unusual in sharing integer and FP registers (that's what the D1:6b == 0).
E1 seems odd in having a large number of address modes. I think most of this is an artifact of the way that I counted, as this architecture really only has a fundamentally small number of ways to create addresses, but has several different-sized offsets and combinations, but all within 1 4-byte instruction; I believe that its addressing mechanisms are fundamentally MUCH simpler than, for example, M2, or especially N1, O3, or P3, but the specific number doesn't capture it very well.
F1 .... is not sold any more.
H1 one might argue that this processor has 2 sizes of instructions, but I'd observe that at any point in the instruction stream, the instructions are either 4-bytes long, or 8-bytes long, with the setting done by a mode bit, i.e., not dynamically encoded in every instruction.

Of the processors called CISCs:

L4 happens to be one in which you can tell the length of the instruction from the first few bits, has a fairly regular instruction decode, has relatively few addressing modes, no indirect addressing. In fact, a big subset of its instructions are actually fairly RISC-like, although another subset is very CISCy.
M2 has a myriad of instruction formats, but fortunately avoided indirect addressing, and actually, MOST of instructions only have 1 address, except for a small set of string operations with 2. I.e., in this case, the decode complexity may be high, but most instructions cannot turn into multiple-memory-address-with-side-effects things.
N1,O3, and P3 are actually fairly clean, orthogonal architectures, in which most operations can consistently have operands in either memory or registers, and there are relatively few weirdnesses of special-cased uses of registers. Unfortunately, they also have indirect addressing, instruction formats whose very orthogonality almost guarantees sequential decoding, where it's hard to even know how long an instruction is until you parse each piece, and that may have side-effects where you'd like to do a register write-back early, but either: must wait until you see all of the instruction until you commit state or must have "undo" shadow-registers or must use instruction-continuation with fairly tricky exception handling to restore the state of the machine

It is also interesting to note that the original member of the family to which O3 belongs was rather simpler in some of the critical areas, with only 5 instruction sizes, of maximum size 10 bytes, and no indirect addressing, and requiring alignment (i.e., it was a much more RISC-like design, and it would be a fascinating speculation to know if that extra complexity was useful in practice).

Now, here's the table again, with the labels:

1991
CPU        Age        3a 3b 3c 3d     4a 4b 5a 5b        6a 6b  # ODD
RULE        6        =1 =4 5 =0     =0 =1 2 =1        >4 >3
-------------------------------------------------------------------------
A1        4         1  4  1  0         0  1  0  1         8  3+  1        AMD 29K
B1        5         1  4  1  0         0  1  0  1         5  4   -        R2000
C1        2         1  4  2  0         0  1  0  1         5  4   -        SPARC
D1        2         1  4  3  0         0  1  0  1         5  0+  1        MC88000
E1        5         1  4 10+ 0         0  1  0  1         5  4   1        HP PA
F1        5         2+ 4  1  0         0  1  0  1         4+ 3+  3        IBM RT/PC
G1        1         1  4  4  0         0  1  1  1         5  5   -        IBM RS/6000
H1        2         1  4  4  0         0  1  0  1         5  4   -        Intel i860
---------------------------------------------------------------
L4        26         4  8  2* 0*       1  2  2  4         4  2   2        IBM 3090
M2        12        12 12 15  0*       1  2  2  4         3  3   1        Intel i486
N1        10        21 21 23  1        1  2  2  4         3  3   -        NSC 32016
O3        11        11 22 44  1        1  2  2  8         4  3   -        MC 68040
P3        13        56 56 22  1        1  6  2 24         4  0   -        VAX

General comment: this may sound weird, but in the long term, it might be easier to deal with a really complicated bunch of instruction formats, than with a complex set of addressing modes, because at least the former is more amenable to pre-decoding into a cache of decoded instructions that can be pipelined reasonably, whereas the pipeline on the latter can get very tricky (examples to follow). This can lead to the funny effect that a relatively "clean", orthogonal architecture may actually be harder to make run fast than one that is less clean. Obviously, every weirdness has its penalties.... But consider the fundamental difficulty of pipelining something like (on a VAX):

ADDL        @(R1)+,@(R1)+,@(R2)+

(something that might theoretically arise from:

register **r1, **r2;
**r2++ = **r1++ + **r1++;

Now, consider what the VAX has to do:

1) Decode the opcode (ADD)
2) Fetch first operand specifier from I-stream and work on it.
  a) Compute the memory address from (r1)
    If aligned
      run through MMU
        if MMU miss, fixup
      access cache
        if cache miss, do write-back/refill
    Elseif unaligned
      run through MMU for first part of data
        if MMU miss, fixup
      access cache for that part of data
        if cache miss, do write-back/refill
      run through MMU for second part of data
        if MMU miss, fixup
      access cache for second part of data
        if cache miss, do write-back/refill
    Now, in either case, we now have a longword that has the address of the actual data.
  b) Increment r1 [well, this is where you'd LIKE to do it, or in parallel with step 2a).] However, see later why not...
  c) Now, fetch the actual data from memory, using the address just obtained, doing everything in step 2a) again, yielding the actual data, which we need to stick in a temporary buffer, since it doesn't actually go in a register.
3) Now, decode the second operand specifier, which goes thru everything that we did in step 2, only again, and leaves the results in a second temporary buffer. Note that we'd like to be starting this before we get done with all of 2 (and I THINK the VAX9000 probably does that??) but you have to be careful to bypass/interlock on potential side-effects to registers .... actually, you may well have to keep shadow copies of every register that might get written in the instruction, since every operand can use auto-increment/decrement. You'd probably want badly to try to compute the address of the second argument and do the MMU access interleaved with the memory access of the first, although the ability of any operand to need 2-4 MMU accesses probably makes this tricky. [Recall that any MMU access may well cause a page fault....]
4) Now, do the add. [could cause exception]
5) Now, do the third specifier .... only, it might be a little different, depending on the nature of the cache, that is, you cannot modify cache or memory, unless you know it will complete. (Why? well, suppose that the location you are storing into overlaps with one of the indirect-addressing words pointed to by r1 or 4(r1), and suppose that the store was unaligned, and suppose that the last byte of the store crossed a page boundary and caused a page fault, and that you'd already written the first 3 bytes. If you did this straightforwardly, and then tried to restart the instruction, it wouldn't do the same thing the second time.
6) When you're sure all is well, and the store is on its way, then you can safely update the two registers, but you'd better wait until the end, or else, keep copies of any modified registers until you're sure it's safe. (I think both have been done ??)
7) You may say that this code is unlikely, but it is legal, so the CPU must do it. This style has the following effects:
  a) You have to worry about unlikely cases.
  b) You'd like to do the work, with predictable uses of functional units, but instead, they can make unpredictable demands.
  c) You'd like to minimize the amount of buffering and state, but it costs you in both to go fast.
  d) Simple pipelining is very, very tough: for example, it is pretty hard to do much about the next instruction following the ADDL, (except some early decode, perhaps), without a lot of gates for special-casing. (I've always been amazed that CVAX chips are fast as they are, and VAX 9000s are REALLY impressive...)
  e) EVERY memory operand can potentially cause 4 MMU uses, and hence 4 MMU faults that might actually be page faults...
  f) AND there are even worse cases, like the addp6 instruction, that can require 40 pages to be resident to complete...
8) Consider how "lazy" RISC designers can be:
  a) Every load/store uses exactly 1 MMU access.
  b) The compilers are often free to re-arrange the order, even across what would have been the next instruction on a CISC. This gets rid of some stalls that the CISC may be stuck with (especially memory accesses).
  c) The alignment requirement avoids especially the problem with sending the first part of a store on the way before you're SURE that the second part of it is safe to do.

Finally, to be fair, let me add the two cases that I knew of that were more on the borderline: i960 and Clipper:

CPU      Age       3a 3b 3c 3d     4a 4b 5a 5b     6a 6b    # ODD
RULE     6        =1 =4 5 =0     =0 =1 2 =1     >4 >3
-------------------------------------------------------------------------
J1        5         4+ 8+ 9+ 0      0  1  0  2      4+ 3+    5    Clipper
K1        3         2+ 8+ 9+ 0      0  1  2+ -      5  3+    5    Intel 960KB

(I think an ARM would be in this area as well; I think somebody once sent me an ARM-entry, but I can't find it again; sorry.)

Note: slight modification (I'll integrate this sometime):

From j...@MIT.EDU  Mon Nov 29 12:59:55 1993
Subject: Re: Why are Motorola's slower than Intel's ? [really what's a RISC]
Newsgroups: comp.arch
Organization: Massachusetts Institute of Technology

Since you made your table IBM has released a couple chips that support unaligned accesses in hardware even across cache line boundaries and may store part of an unaligned object before taking a page fault on the second half, if the object crosses a page boundary.

These are the RSC (single chip POWER) and PPC 601 (based on RSC core).
John Carr (j...@mit.edu)

(Back to me; jfc's comments are right; if I had time, I'd add another line to do PPC ... which, in some sense replays the S/360 -> S/370 history of relaxing alignment restrictions somewhat. I conjecture that at least some of this was done to help Apple s/w migration.)

SUMMARY:

1) RISCs share certain architectural characteristics, although there are differences, and some of those differences matter a lot.
2) However, the RISCs, as a group, are much more alike than the CISCs as a group.
3) At least some of these architectural characteristics have fairly serious consequences on the pipelinability of the ISA, especially in a virtual-memory, cached environment.
4) Counting instructions turns out to be fairly irrelevant:
  a) It's HARD to actually count instructions in a meaningful way... (if you disagree, I'll claim that the VAX is RISCier than any RISC, at least for part of its instruction set :-)
  Why: VAX has a MOV opcode, whereas RISCs usually have a whole set of opcodes for {LOAD/STORE} {BYTE, HALF, WORD}
  b) More instructions aren't what REALLY hurts you, anywhere near as much as features that are hard to pipeline:
  c) RISCs can perfectly well have string-support, or decimal arithmetic support, or graphics transforms ... or lots of strange register-register transforms, and it won't cause problems ..... but compare that with the consequence of adding a single instruction that has 2-3 memory operands, each of which can go indirect, with auto-increments, and unaligned data...

PART II - ADDRESSING MODES

I promised to repost this with fixes, and people have been asking for it, so here it is again: if you saw it before, all that's really different is some fixes in the table, and a few clarified explanations:

THE GIANT ADDRESSING MODE TABLE (Corrections happily accepted)

This table goes with the higher-level table of general architecture characteristics.

Address mode summary
r        register
r+        autoincrement (post)        [by size of data object]
-r        autodecrement (pre)        [by size,...and this was the one I meant]
>r        modify base register        [generally, effective address -> base]

NOTE: sometimes this subsumes r+, -r, etc, and is more general, so I categorize it as a separate case.

d        displacement                d1 & d2 if 2 different displacements
x        index register
s        scaled index
a        absolute        [as a separate mode, as opposed to displacement+(0)]
I        Indirect

Shown below are 22 distinct addressing modes [you can argue whether these are right categories]. In the table are the number of different encodings/variations [and this is a little fuzzy; you can especially argue about the 4 in the HP PA column, I'm not even sure that's right]. For example, I counted as different variants on a mode the case where the structure was the same, but there were different-sized displacements that had to be decoded. Note that meaningfully counting addressing modes is at least as bad as meaningfully counting opcodes; I did the best I could, and I spent a lot of hours looking at manuals for the chips I hadn't programmed much, and in some cases, even after hours, it was hard for me to figure out meaningful numbers... Most of these architectures are used in general-purpose systems and most have at least one version that uses caches: those are important because many of the issues in thinking about addressing modes come from their interactions with MMUs and caches...

        1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20  21  22
                                                                     r   r
                                                           r  r  r   +d1 +d1
                    r  r  r |              |   r  r |   r  r+ +d +d1 I   +s
           r  r  r  +d +x +s|         s+ s+|s+ +d +d|r+ +d I  I  I   +s  I  
        r  +d +x +s >r >r >r|r+ -r a  a  r+|-r +x +s|I  I  +s +s +d2 +d2 +d2
        -- -- -- -- -- -- --|-- -- -- -- --|-- -- --|-- -- -- -- --- --- ---
AMD 29K  1                  |              |        |   
Rxxx        1               |              |        |   
SPARC       1  1            |              |        |   
88K         1  1  1         |              |        |   
HP PA       2  1  1  4  1  1|              |        |   
ROMP     1  2               |              |        |    
POWER       1  1     1  1   |              |        |    
i860        1  1     1  1   |              |        |    
Swrdfish 1  1  1            |       1      |        |    
ARM      2  2     2  1     1| 1  1
Clipper  1  3  1            | 1  1  2      |        |    
i960KB   1  1  1  1         |       2  2   |    1   |    

S/360       1               |                   1   |    
i486     1  3  1  1         | 1  1  2      |    2  3|   
NSC32K      3               | 1  1  3  3   |       3|             9       
MC68000  1  1               | 1  1  2      |    2   | 
MC68020  1  1               | 1  1  2      |    2  4|                 16  16
VAX      1  3     1         | 1  1  1  1  1| 1     3| 1  3  1  3

COLUMN NOTES

1) Columns 1-7 are addressing modes used by many machines, but very few, if any, clearly-RISC architectures use anything else. They are all characterized by what they don't have: 2 adds needed before generating the address, indirect addressing, variable-sized decoding.
2) Columns 13-15 include fairly simple-looking addressing modes, which, however, may require 2 back-to-back adds before the address is available. ["may" because some of them use index-register=0 or something to avoid indexing, and usually in such machines, you'll see variable timing figures, depending on use of indexing.]
3) Columns 16-22 use indirect addressing.

ROW NOTES

1) Clipper & i960, of current chips, are more on the RISC-CISC border, or are sort of "modern CISCs". ARM is also characterized by ARM people (Hot Chips IV) as "not a pure RISC".
2) ROMP has a number of characteristics different from the rest of the RISCs, you might call it "early RISC", and it is of course no longer made.
3) You might consider HP PA a little odd, as it appears to have more addressing modes, in the same way that CISCs do, but I don't think this is the case: it's an issue of whether you call something several modes or one mode with a modifier, just as there is trouble counting opcodes (with & without modifiers). From my view, neither PA nor POWER have truly "CISCy" addressing modes.
4) Notice difference between 68000 and 68020 (and later 68Ks): a bunch of incredibly-general & complex modes got added...
5) Note that the addressing on the S/360 is actually pretty simple, mostly base+displacement, although RX-addressing does take 2 regs+offset.
6) A dimension not shown on this particular chart, but also highly relevant, is that this chart shows the different types of modes, not how many addresses can be found in each instruction. That may be worth noting also:

AMD : i960        1        one address per instruction
S/360 - MC68020   2        up to 2 addresses
VAX               6        up to 6

By looking at alignment, indirect addressing, and looking only at those chips that have MMUs, consider the number of times an MMU might be used per instruction for data address translations:

AMD - Clipper     2        [Swordfish & i960KB: no TLB]
S/360 - NSC32K    4
MC68Ks (all)      8
VAX              24

When RS/6000 does unaligned, it must be in the same cache line (and thus also in same MMU page), and traps to software otherwise, thus avoiding numerous ugly cases.

Note: in some sense, S/360s & VAXen can use an arbitrary number of translations per instruction, with MOVE CHARACTER LONG, or similar operations & I don't count them as more, because they're defined to be interruptable/restartable, saving state in general-purpose registers, rather than hidden internal state.

SUMMARY

1) Computer design styles mostly changed from machines with:
    a. 2-6 addresses per instruction, with variable sized encoding; address specifiers were usually "orthogonal", so that any could go anywhere in an instruction
    b. sometimes indirect addressing
    c. sometimes need 2 adds before effective address is available
    d. sometimes with many potential MMU accesses (and possible exceptions) per instruction, often buried in the middle of the instruction, and often after you'd normally want to commit state because of auto-increment or other side effects.
to machines with:
    a. 1 address per instruction
    b. address specifiers encoded in small # of bits in 32-bit instruction
    c. no indirect addressing
    d. never need 2 adds before address available
    e. use MMU once per data access

and we usually call the latter group RISCs. I say "changed" because if you put this table together with the earlier one, which has the age in years, the older ones were one way, and the newer ones are different.

2) Now, ignoring any other features, but looking at this single attribute (architectural addressing features and implementation effects thereof), it ought to be clear that the machines in the first part of the table are doing something technically different from those in the second part of the table. Thus, people may sometimes call something RISC that isn't, for marketing reasons, but the people calling the first batch RISC really did have some serious technical issues at heart.

3) One more time: this is not to say that RISC is better than CISC, or that the few in the middle are bad, or anything like that ... but that there are clear technical characteristics...

PART III - MORE ON TERMINOLOGY; WOULD YOU CALL THE CDC 6600 A RISC?

Article: 39495 of comp.arch
Newsgroups: comp.arch
From: ma...@mash.engr.sgi.com (John R. Mashey)
Subject: Re: Why CISC is bad (was P6 and Beyond)
Organization: Silicon Graphics, Inc.
Date: Wed, 6 Apr 94 18:35:01 PDT

In article 2nii0d$k...@crl2.crl.com>, dben...@crl.com (Andrea Chen) writes:

You may be correct on the creation of the term, but RISC does refer to a school of computer design that dates back to the early seventies.

This is all getting fairly fuzzy and subjective, but it seems very confusing to label RISC as a school of thought that dates back to the early 1970s.

1) One can say that RISC is a school of thought that got popular in the early-to-mid 80's, and got widespread commercial use then.
2) One can say that there were a few people (like John Cocke & co at IBM) who were doing RISC-style research projects in the mid-70s.
3) But if you want to go back, as has been discussed in this newsgroup often, a lot of people go back to the CDC 6600, whose design started in 1960, and was delivered in 4Q 1964. Now, while this wouldn't exactly fit the exact parameters of current RISCs, a great deal of the RISC-style approach was there in the central processor ISA:
    a) Load/store architecture.
    b) 3-address register-register instructions
    c) Simply-decoded instruction set
    d) Early use of instruction scheduling by the compiler, expectation that you'd usually program in high-level language and not often resort to assembler, as you'd expect the compiler to do well.
    e) More registers than common at the time
    f) ISA designed to make decode/issue easy

Note that the 360 /91 (1967) offered a good example of building a CISC-architecture into a high-performance machine, and was an interesting comparison to the 6600.

4) Maybe there is some way to claim that RISC goes back to the 1950s, but in general, most machines of the 1950s and 1960s don't feel very RISCy (to me). Consider Burroughs B5000s; IBM 709x, 707x, 1401s; Univac 110x; GE 6xx, etc, and of course, S/360s. Simple load/store architectures were hard to find; there were often exciting instruction decodings required; indirect addressing was popular; machines often had very few accumulators.

5) If you want to try sticking this in the matrix I've published before, as best as I recall, the 6600 ISA generally looked like:

CPU      3a 3b 3c 3d    4a 4b 5a 5b    6a 6b    # ODD
RULE     =1 =4 <5 =0    =0 =1 <2 =1    >4 >3
------------------------------------------------------
CDC 6600  2  *  1  0     0  1  0  1     3  3    4 (but  ~1 if fair)

That is:

2: it has 2 instruction sizes (not 1), 15 & 30 bits (however, were packed into 60-bit words, so if you had 15, 30, 30, the second 30-bitter would not cross word boundaries, but would start in the second word.)
*: 15-and-30 bit instructions, not 32-bit.
1: 1 addressing mode [Note: Tim McCaffrey emailed me that one might consider there to be more, i.e., you could set address register to combinations of the others to give autoincrement/decrement/Index+offset, etc.] In any case, you compute an address as a simple combination of 1-2 registers, and then use the address, without further side-effects.
0: no indirect addressing
1: have one memory operand per instruction
0: do NOT support arbitrary alignment of operands in memory (well, it was a word-addressed machine :-)
1: use an MMU for data translation no more than once per instruction (MMU used loosely here)
3,3: had 3-bit fields for addressing registers, both index and FP

Now, of the 10 ISA attributes I'd proposed for identifying typical RISCs, the CDC 6600 obeys 6. It varies in having 2 instruction formats, and in having only 3 bits for register fields, but it had simple packing of the instructions into fixed-size words, and register/accumulators were pretty expensive in those days (some popular machines only had one accumulator and a few index registers, so 8 of each was a lot). Put another way: it had about as many registers as you'd conveniently build in a high-speed machine, and while they packed 2-4 operations into a 60-bit word, the decode was pretty straightforward. Anyway, given the caveats, I'd claim that the 6600 would fit much better in the RISC part of the original table...

PART IV - RISC, VLIW, STACKS

Article: 43173 of comp.arch
Newsgroups: comp.sys.amiga.advocacy,comp.arch
From: ma...@mash.engr.sgi.com (John R. Mashey)
Subject: Re: PG: RISC vs. CISC was: Re: MARC N. BARR
Date: Thu, 15 Sep 94 18:33:14 PDT

In article 35a1a3$m...@doc.armltd.co.uk>, Clive...@armltd.co.uk writes:

Really? The Venerable John Mashey's table appears to contain as many exceptions to the rule about number of GP registers as most others. I'm sure if one were to look at the various less conventional processors, there would be some clearly RISC processors that didn't have a load-store architecture - stack and VLIW processors spring to mind.

I'm not sure I understand the point. One can believe any of several things:
  a) One can believe RISC is some marketing term without technical meaning whatsoever. OR
  b) One can believe that RISC is some collection of implementation ideas. This is the most common confusion.
  c) One can believe that RISC has some ISA meaning (such as RISC == small number of opcodes) ... but have a different idea of RISC than do most chip architects who build them. If you want to pay words extra money every Friday to mean something different than what they mean to practitioners ... then you are free to do so, but you will have difficulty communicating with practitioners if you do so.
  EX: I'm not sure how stack architectures are "clearly RISC" (?) Maybe CRISP, sort of. Burroughs B5000 or Tandem's original ISA: if those are defined as RISC, the term has been rendered meaningless.
  EX: VLIWs: I don't know any reason why I'd call VLIWs, in general, either clearly RISC or clearly not. VLIW is a technique for issuing instructions to more functional units than you have the die space/cycle time to decode more dynamically. There gets to be a fuzzy line between:
    i. A VLIW, especially if it compresses instructions in memory, then expands them out when brought into the cache.
    ii. A superscalar RISC, which does some predecoding on the way from memory->cache, adding "hint" bits or rearranging what it keeps there, speeding up cache->decode->issue.
  At least some VLIWs are load/store architectures, and the operations they do usually look like typical RISC operations. OR, you can believe that:

  d) RISC is a term used to characterize a class of relatively-similar ISAs mostly developed in the 1980s. Thus, if a knowledgeable person looks at ISAs, they will tend to cluster various ISAs as:
    1) Obvious RISC, fits the typical rules with few exceptions.
    2) Obviously not-RISC, fits the inverse of the RISC rules with relatively few exceptions. Sometimes people call this CISC ... but whereas RISCs, as a group, have relatively similar ISAs, the CISC label is sometimes applied to a widely varying set of ISAs.
    3) Hybrid / in-the-middle cases, that either look like CISCy RISCs, or RISCy CISCs. There are a few of these.
  Cases 1-3 may apply to reasonably contemporaneous processors, and make some sense. And then:
  4) CPUs for which RISC/CISC is probably not a very relevant classification. I.e., one can apply the set of rules I've suggested, and get an exception-count, but it may not mean much in practice, especially when applied to older CPUs created with vastly different constraints than current ones, or embedded processors, or specialized ones. Sometimes an older CPU might have been designed with some similar philosophies (i.e., like CDC 6600 & RISC, sort of) whether or not it happened to fit the rules. Sometimes, die-space constraints may have led to "simple" chips, without making them fit the suggested criteria either. Personally, torturous arguments about whether a 6502, or a PDP-8, or a 360 /44 or an XDS Sigma 7, etc, are RISC or CISC ... do not usually lead to great insight. After a while such arguments are counting angels dancing on pinheads ("Ahh, only 10 angels, must be RISC" :-).

In this belief space, one tends to follow Hennessy & Patterson's comment in E.9 that "In the history of computing, there has never been such widespread agreement on computer architecture." None of this is pejorative of earlier architectures, just the observation that the ISAs newly-developed in the 1980s were far more similar than the earlier groups of ISAs. [I recall a 2-year period in which I used IBM 1401, IBM 7074, IBM 7090, Univac 1108, and S/360, of which only the 7090 and 1108 bore even the remotest resemblance to each other, i.e., at least they both had 36-bit words.]

Summary: RISC is a label most commonly used for a set of ISA characteristics chosen to ease the use of aggressive implementation techniques found in high-performance processors (regardless of RISC, CISC, or irrelevant). This is a convenient shorthand, but that's all, although it probably makes sense to use the term the way it's usually meant by people who do chips for a living.

July 28, 2019

Ponylang (SeanTAllen)

Last Week in Pony - July 28, 2019 July 28, 2019 12:03 PM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

Andreas Zwinkau (qznc)

Sacrifice of Scar July 28, 2019 12:00 AM

Short Lion King fanfic

Read full article!

July 27, 2019

Ponylang (SeanTAllen)

0.30.0 Released July 27, 2019 10:50 AM

Pony version 0.30.0 is now available. This is a high-priority bug fix release that contains a fix for a bug that could cause Pony programs to segfault. We recommend upgrading as soon as possible. Also note, this release does include a breaking API change (albeit small). Please note that, at this time, the update to Homebrew has yet to be merged, but should be within a day. All other release targets are completed.

July 26, 2019

Awn Umar (awn)

working with forks in git July 26, 2019 12:00 AM

Working with forks entails having a single codebase but being able to push and pull from multiple remote endpoints. This can get fairly annoying, especially for Go projects which have a specified import path which must be reflected in the filesystem path (at least before modules).

Luckily git handles this use-case pretty well.

$ # Working with a hypothetical project hosted on GitHub.
$ cd $GOPATH/src/github.com
$ 
$ # Create a directory for the source owner, to preserve import paths.
$ mkdir source && cd source
$ 
$ # Clone your own fork into this directory.
$ git clone git@github.com:me/repo.git && cd repo
$ 
$ # Add the original repository URL as a new remote.
$ git remote add upstream git@github.com:source/repo.git
$ 
$ git pull upstream master # Pull from original repository
$ git push origin master   # Push to forked repository

July 25, 2019

Derek Jones (derek-jones)

Want to be the coauthor of a prestigious book? Send me your bid July 25, 2019 07:22 PM

The corruption that pervades the academic publishing system has become more public.

There is now a website that makes use of an ingenious technique for helping people increase their paper count (as might be expected, the competitive China thought of it first). Want to be listed as the first author of a paper? Fees start at $500. The beauty of the scheme is that the papers have already been accepted by a journal for publication, so the buyer knows exactly what they are getting. Paying to be included as an author before the paper is accepted incurs the risk that the paper might not be accepted.

Measurement of academic performance is based on number of papers published, weighted by the impact factor of the journal in which they are published. Individuals seeking promotion and research funding need an appropriately high publication score; the ranking of university departments is based on the publications of its members. The phrase publish or perish aptly describes the process. As expected, with individual careers and departmental funding on the line, the system has become corrupt in all kinds of ways.

There are organizations who will publish your paper for a fee, 100% guaranteed, and you can even attend a scam conference (that’s not how the organizers describe them). Problem is, word gets around and the weighting given to publishing in such journals is very low (or it should be, not all of them get caught).

The horror being expressed at this practice is driven by the fact that money is changing hands. Adding a colleague as an author (on the basis that they will return the favor later) is accepted practice; tacking your supervisor's name onto the end of the list of authors is standard practice, irrespective of any contribution they might have made (how else would a professor accumulate 100+ published papers?).

I regularly receive emails from academics telling me they would like to work on this or that with me. If they look like proper researchers, I am respectful; if they look like an academic paper mill, my reply points out (subtly or otherwise) that their work is not of high enough standard to be relevant to industry. Perhaps I should send them a quote for having their name appear on a paper written by me (I don’t publish in academic journals, so such a paper is unlikely to have much value in the system they operate within); it sounds worth doing just for the shocked response.

I read lots of papers, and usually ignore the list of authors. If it looks like there is some interesting data associated with the work, I email the first author, and will only include the other authors in the email if I am looking to do a bit of marketing for my book or the paper is many years old (so the first author is less likely to have the data).

I continue to be amazed at the number of people who continue to strive to do proper research in this academic environment.

I wonder how much I might get by auctioning off the coauthorship of my software engineering book?

July 24, 2019

Jeremy Morgan (JeremyMorgan)

Creating Trimmed Self Contained Executables in .NET Core July 24, 2019 02:09 PM

I'm going to show you a cool new feature in .NET Core 3.0.

Let's say you want to create a simple, lean executable you can build and drop on to a server.

For an example we'll create a console app that opens a text file, reads it, and displays the output.

First, let's create a new .NET Core app:

dotnet new console

This will scaffold a new console application.

Now run it:

dotnet run

It should look something like this:

Creating Trimmed Self Contained Executables in Net Core

I'm on a Mac here, but it doesn't matter as long as your development box has the .NET Core CLI Installed.

This displays "Hello World" on the console.

Now, let's create a file called file.txt:

this is a file! With some lines whatever

Doesn't matter what you put in here, as long as it has some text in it.

Next we'll write something that will read those lines and display them. Remove the "Hello World!" code and replace it with this:

string[] lines = System.IO.File.ReadAllLines(@"file.txt");

foreach (string line in lines)
{
    Console.WriteLine("\t" + line);
}

Console.WriteLine("Press any key to exit.");
System.Console.ReadKey();

This is pretty much your basic cookie cutter code for:

  • opening up a file
  • reading it into a string array
  • looping through the array line by line
  • printing each line
  • exiting
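
If you're following along, the whole Program.cs ends up looking roughly like this (the namespace name is just a stand-in for whatever dotnet new generated):

using System;

namespace ConsoleApp // stand-in; use the name `dotnet new console` generated for you
{
    class Program
    {
        static void Main(string[] args)
        {
            // Read every line of the file into an array.
            string[] lines = System.IO.File.ReadAllLines(@"file.txt");

            foreach (string line in lines)
            {
                Console.WriteLine("\t" + line);
            }

            Console.WriteLine("Press any key to exit.");
            System.Console.ReadKey();
        }
    }
}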

Pretty simple stuff. When I run it on my machine it looks like this:

Creating Trimmed Self Contained Executables in Net Core

And that's great. But I'm on a Mac here, what if I want it to run on a Windows Machine? Linux? No problem, this is .NET Core right? We'll just publish it to multiple targets.

But what if .NET Core isn't installed on the machine?

What if I just want a simple executable I can run to read this file without a pile of files or .Net core installed?

Publishing in .Net Core

Let's back up a little. .NET Core has had publishing profiles for a long time. The idea behind "target" publishing is one of the biggest selling points of the platform. Build your app, then publish it for a specific target, Windows, OSX, or Linux.

You can publish it a few different ways (example commands for the first two follow the list):

  • Framework Dependent Deployment - This relies on a shared version of .NET Core that's installed on the computer/server.
  • Self Contained Deployment - This doesn't rely on .NET Core being installed on the server. All components are included with the package (tons of files usually).
  • Framework Dependent Executables - This is very similar to a framework dependent deployment, except it creates executables that are platform specific but still require the .NET Core libraries.
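
For reference, the first two options map to publish commands roughly like these (win-x64 is only an example runtime identifier, and in this version of .NET Core passing -r produces a self contained publish by default):

dotnet publish -c Release

dotnet publish -c Release -r win-x64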

Ok, so what's this cool new feature I'm going to show?

Well, when you do a self contained deployment it's cool because you don't need the runtime installed, but it ends up looking something like this:

Creating Trimmed Self Contained Executables in Net Core

This is the application we just built published as a Self Contained Deployment for Windows. Yikes.

Let's say you wanted to share this file reader application; asking someone to copy all these files into a folder just to run something that reads a text file is silly.

New Feature: Self Contained Executables

So to build that self contained executable I ran the following command:

dotnet publish -c release -r win10-x64

This should look pretty familiar to you if you've done it before. But .NET Core 3.0 has a cool new feature:

dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true

Using this flag, it will build something a little different:

Creating Trimmed Self Contained Executables in Net Core

That's MUCH better. It's now a single exe and .pdb. If we look at the info, it's not super small:

Creating Trimmed Self Contained Executables in Net Core

But it includes the .NET Core runtime with it as well. And here's another cool feature.

dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true

So in our example it doesn't change the size, but if you have a large, complex application with a lot of libraries and you publish it to a single file, it can get HUGE. By adding the PublishTrimmed flag it will only include the libraries you need to run the application.

So when we copy the files to a Windows 10 machine, we have a nice small package:

Creating Trimmed Self Contained Executables in Net Core

And we run it and it works! Without .NET Core!

Creating Trimmed Self Contained Executables in Net Core

and if I change my target:

dotnet publish -r linux-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true

I can run it on a Linux server without .NET Core just as easily:

Creating Trimmed Self Contained Executables in Net Core

Just remember on a Linux machine you won't need the .NET Core runtime, but you will need the Prerequisites for .NET Core on Linux installed.

Conclusion

So this is a cool feature of .NET Core 3.0. If you want to build trimmed down self contained executables for any of the platforms, you can do it easily with a couple of flags.

This is great for those stupid simple things: console apps, data readers, microservices, or whatever you want to build and just drop on a machine with no hassle. I thought it was a cool feature to show.

If you want to master .NET Core, check out the ASP.Net Core Path on Pluralsight. The courses go pretty in depth and are a great way to ramp up.

Yell at me on Twitter with questions or comments!




Benjamin Pollack (gecko)

Falsehoods Programmers Believe About Cats July 24, 2019 12:43 PM

Inspired by Falsehoods Programmers Believe About Dogs, I thought it would be great to offer you falsehoods programmers believe about mankind’s other best friend. But since I don’t know what that is, here’s instead a version about cats.

  1. Cats would never eat your face.
  2. Cats would never eat your face while you were alive.1
  3. Okay, cats would sometimes eat your face while you’re alive, but my cat absolutely would not.
  4. Okay, fine. At least I will never run out of cat food.
  5. You’re kidding me.
  6. There will be a time when your cat knows enough not to vomit on your computer.
  7. There will be a time when your cat cares enough not to vomit on your computer.
  8. At the very least, if your cat begins to vomit on your computer and you try to move it to another location, your cat will allow you to do so.
  9. When your cat refuses to move, it will at least not manage to claw your arm surprisingly severely while actively vomiting.
  10. Okay, but at least they won’t attempt to chew the power cord while vomiting and clawing your hand, resulting in both of you getting an electric shock.
  11. …how the hell are you even alive?2
  12. Cats enjoy belly rubs.
  13. Some cats enjoy belly rubs.
  14. Cats reliably enjoy being petted.
  15. Cats will reliably tell you when they no longer enjoy being petted.
  16. Cats who trust their owners will leave suddenly when they’re done being petted, but at least never cause you massive blood loss.
  17. Given all of the above, you should never adopt cats.
  18. You are insane.

Happy ten years in your forever home, my two scruffy kitties. Here’s to ten more.


  1. Here, ask Dewey, he knows more about it than I do. [return]
  2. Because, while my cat has absolutely eaten through a power cord, this is an exaggeration. The getting scratched while trying to get my cat not to puke on a computer I was actively using happened at a different time from the power cord incident. Although this doesn’t answer the question how she is alive. [return]

Zac Brown (zacbrown)

Book Notes: Deep Work by Cal Newport July 24, 2019 07:00 AM

Note: These notes are lightly organized and reflect my own takeaways from this book. They’re captured here for my own purposes. If you should find them useful, great!

Rule #1 - Work Deeply

Depth Philosophy - styles of achieving depth

  • Monastic - isolationist, doesn’t work well for the vast majority of folks and not likely to work for me
  • Bimodal - a few days on, a few days off
    • utilized by Carl Jung and Adam Grant
    • periods of intense deep work followed by periods of shallower work
    • probably most useful in achieving “big leaps” periodically
  • Rhythmic - best for day to day life
    • Cal Newport’s preferred
    • Do it every day. Probably best for growing the muscle initially.
    • For future - probably intersperse this approach with Bimodal.

Ritualize

  • develop habits that reduce willpower consumption
    • consistent routines and organization, e.g.
      • same ritual for starting and ending your work day
      • wearing the same clothing for your work week (Zuckerberg/Jobs style)
    • consistent places to work and for how long
  • pay attention to rituals before you enter Deep Work, look for opportunities to repeat
    • meditation
    • cup of tea
    • quick walk

Make Grand Gestures

  • e.g. holing up in a hotel suite ($1000/night) for a few days to complete something
  • probably less important/practical in most cases
  • practically, this may be as simple as paying for a private office

Don’t Work Alone

  • hard to do this if you’re a remote worker
  • How can I optimize this in business trips? Hole up in a conference room with key folks?
  • What about this will help me create breakthroughs?
  • whiteboard effect is very helpful - like time spent with J and so on

Execute Like a Business

Discipline #1 - Focus on the wildly important.

  • Saying no more than saying yes. Both to yourself and to others.
  • More importantly, say Yes to the truly important and lean into it.
    • Stay out of the attention grabbing fray to avoid saying No so often that it drains willpower.

Discipline #2 - Act on Lead Measures

  • Tighten the OODA loop in other words.
  • Easiest metric is time spent working deeply. If you’re not measuring this then you’re not getting a good sense of the quality of your time.

Discipline #3 - Keep a Compelling Scoreboard

  • “People play differently when you’re keeping score.”
  • “What’s measured gets managed.”
  • Try tally mark system.
    • Track hours spent working deeply. Each tally is an hour.
    • Circle tally marks where something big is finished.
  • Shoot for at least 50% deep work initially. Move up from there. Ideally >75%.

Discipline #4 - Create a Cadence of Accountability

  • Reviewing periodically helps put progress toward a goal into context.
  • Weekly review of notebook w/ score card.
    • Do it Saturday morning?
    • Review what went well and what went poorly.
    • Revise the plan for the following week to maximize likelihood of success.

Be Lazy

  • Make time for Leisure. Continually keeping the mind busy prevents it from forming those really deep connections.
  • Practically speaking, end the day at 17:30-18:30.
  • Working beyond 8 hours results in working on things which are not that important.
    • ~4-5 hours of deep work is possible per day. Anything else should not dominate.
  • Develop a shutdown ritual.
    • Note everything in flight 15-20 minutes before end of work day.
    • Provide next steps for anything that is in flight.
    • Note what is completed from the todo list of the day.
    • Be consistent. Have a verbal cue, e.g. “Shutdown complete.”
    • Basic outline of steps:
      1. Check email/Slack/Basecamp for any outstanding tasks or requests.
      2. Transfer tasks that are on the mind to task list and add relevant notes for next steps.
        • There may be multiple lists of multiple projects ongoing.
      3. Quickly skim every task list and compare against the next few days of the calendar. Note any deadlines or appointments that impact task prioritization. Rearrange next day as needed.
      4. Say the words, e.g. “Shutdown complete.”

Rule #2 - Embrace Boredom

Boredom is directly related to ability to focus for long periods of time. Distractions, e.g. smartphones, reduce our capacity to be focused because they force stimulation. Focus is not strictly stimulating.

Don’t take breaks from distraction. Instead take breaks from focus.

  • Internet Sabbath doesn’t address the underlying problem. Relies on taking a break from distractions.
  • Schedule in advance when Internet use will occur. Record the next time Internet use is allowed.
    1. This should work even with need for Internet as part of your job.
    2. Time outside the block should be absolutely free of Internet use.
    3. Scheduling Internet use at home can further improve concentration.
      • Just always have a notebook for thoughts/queries to look up?
  • This is all about rewiring the brain to be comfortable resisting distracting stimuli.

Work Like Teddy Roosevelt

  • Estimate how long a task should take and publicly commit to a deadline. For example - the person you’re doing the work for.
  • Start small initially. Focusing intensely with no distraction breaks. Over time, quality of intensity and distraction reduction should increase.

Meditate Productively

  • Focus on a single well defined professional problem. Can be done while walking, on transit, or on a car ride.
  • Lets you make use of your brain while doing simple mechanical tasks.
  • This is less about productivity and more about improving deep focus ability. Forces you to resist distraction and return attention repeatedly to a well-defined problem.
  • Suggestions to make this more successful:
    1. Be wary of distractions and looping.
      • Similar to meditation, when you notice the mind wandering, bring attention back to the problem you’re focusing on.
      • Note when you’ve rehashed the same facts or observations repeatedly. Note it and redirect attention to the next problem solving step.
    2. Structure Your Deep Thinking
      • Structure helps decompose a problem. It can help to develop a framework for assessing the given problem and reasoning about it.
      • Useful steps include:
        • review the relevant variables
        • define the next-step question
        • Using the two pieces of info above allows you to plan and iterate on targeting your attention.
        • after solving the next-step question, consolidate your gains by reviewing clearly the answer identified
        • push yourself to the next level of depth by restarting the process

Memorize a Deck of Cards

  • Memory training exercises help improve depth of focus. AKA helps by improving concentration.
  • May be useful to learn some basic memorization training exercises like memorizing the order of a deck of cards.

Rule #3 - Quit Social Media

The perceived value of social media is not nearly as deep or meaningful as they’d have us believe. For example - liking a Facebook post is not a replacement for having dinner and conversation with that same person.

  • Consider: the “any benefit approach” - “If X tool provides any benefit at all, it is worth using.”
    • This is obviously false, especially with things that consume our attention and willpower.
    • This is where social media pretty much categorically falls.
  • Consider: the “craftsman approach” - “A tool is worth using only if its benefits substantially outweigh the negative impacts.”

Apply the Law of the Vital Few to Your Internet Habits

  • Start by identifying the main high level goals in professional and personal life.
    • Professional Goals
      • Be an effective Engineering Leader for my team, our projects, and the broader company.
      • Be an advocate for the team for work/life balance and reasonable work expectations.
      • Be a resource for my team and peers to help them where possible.
    • Professional Activities that further these goals
      • Engineering Leader
        • regular 1:1’s with my team
        • realistic costing of our projects
        • keeping SLT expectations realistic
        • deeply consider designs and plans for projects
      • Advocate
        • encourage engineers to point out when things are negatively impacting their lives
        • telling SLT when our workload is too much and manage expectations
  • Any tool should be weighed against the aforementioned goals and activities to identify whether it significantly contributes to or detracts from them.
  • While there may in fact be benefits to a given social network, when evaluated holistically, they generally don’t outweigh the costs.

The Law of the Vital Few

  • In many settings, 80 percent of a given effect is due to just 20 percent of the possible causes.
  • In context of achieving goals, while there may be 10-15 activities that contribute towards goals, there’s probably 2-3 that really make the biggest difference. You should focus on those top 2-3 activities.
    • This is especially important in view of our limited attention and willpower. E.g. Do a couple things well rather than many things poorly.
    • By limiting focus to the most important activities, this probably magnifies their effectiveness.

Quitting Social Media

  • Use the “pack” method. Pack away everything and, as you need it, unpack it again back into your life. In practice, this means stop using Facebook, LinkedIn, Twitter, and so on and see who actually reaches out to find out where you went. Chances are, no one will.
  • In the case of Social Media, log out of all of them and login as needed. Block them from your devices in most cases.
  • Stuff, social networks, habits, etc accumulate without us noticing. The approach above helps identify what is truly useful.
  • Practically speaking, ask the following two questions of each social media service:
    1. Would the last thirty days have been notably better if I had been able to use this service?
    2. Do people care that I wasn’t using this service?
  • If the answer is no to both, get rid of it.
  • If the answer is yes to both, keep using it.
  • If the answer is ambiguous, use best judgement.
    • Lean toward “no” in general - they’re designed to be addictive and easily become a time suck.

Don’t Use the Internet to Entertain Yourself

  • Both from an enrichment and relaxing effect, the Internet of today does a piss poor job of being a leisure activity.
    • The modern Internet is designed to generate FOMO and anxiety.
    • It is best treated with skepticism and avoided.
    • Better activities might include:
      • physical hobby like woodworking
      • reading books, papers, magazines, etc
      • writing blog posts, essays, etc
      • cooking
      • watching a movie or specific show - not surfing
  • Put thought into your leisure time. Schedule if necessary to help yourself make progress toward personal goals.
  • Planning leisure also helps avoid the “surfing” trap. You know what you’ll do so no need to surf.
    • This doesn’t mean being inflexible - just have a plan for how you’d like to use your time so that you have a default that isn’t surfing.

Rule #4 - Drain the Shallows

  • You can’t completely eliminate the shallow work. A non-trivial amount of shallow work is required to stay successful at work. Sadly, you cannot exist solely in the Ivory Tower.
    • e.g. You can avoid checking email every ten minutes but probably can’t entirely ignore email like a hermit.
  • There’s a limit to the amount of deep work you can perform in a single day. Shallow work isn’t problematic till it begins to crowd out your deep work time.
  • Fracturing deep work time can be highly detrimental. Ideal to get big chunks of deep work and batch the shallow work together rather than interspersing.
  • Treat shallow work with suspicion in general. If someone asks you to do something and you can find little depth to it, push back.

Schedule Every Minute of Your Day

  • We often spend too much of our days on autopilot - being reactive. Giving too little thought to how we’re using our time.
  • Schedule in a notebook the hours of your work day.
    • block off times for tasks which fall outside meetings
    • if interruptions occur, take a moment to reschedule and shuffle the day
      • be flexible - if you hit an important insight, ignore the schedule and carry on with the insight
    • Periodically evaluate “What makes the most sense for me to do with the time that remains?” throughout the day. This affords flexibility and spontaneity to ensure you’re always working on the most important thing.

Quantify the Depth of Every Activity

  • For any task, ask “How long would it take in months to train a smart recent college graduate with no special training in my field to complete this task?”
  • If a task would take many months to train someone, then that’s the deep work to focus on.

Ask for a Shallow Work Budget

  • Ask boss or self, how much shallow work should you be doing? What percentage of your time?
  • After some analysis, may find that:
    • you avoid some projects which seem to have a high amount of shallow work
    • begin purging shallow work from current workload
  • Important to be aligned with boss. Don’t want mismatched expectations but be clear on what you believe is shallow work and why. Try to prune as much as possible.
  • Quantifying the percentage can be useful if you need to show that there’s too much shallow work on your plate and want to force your management’s hand in agreeing to a maximum amount.

Finish Work by Five Thirty

  • “fixed schedule productivity” - work backwards from the endpoint to identify productivity strategies that ensure a 17:30 end to the work day
  • Identify rules and habits that help enforce these regular work hours. By maintaining the regular work hours, you preserve your ability to sustain Deep Work regularly. It forces draining of the shallows and restoration of your limited willpower/focus.
  • Be extremely cautious with the use of the word “Yes.”
    • Be clear in your refusal but vague in your reasons.
      • Giving details allows the requestor of your time wiggle room to get you to say yes in a follow up request.
    • Resist the urge to offer consolation prizes when you turn down a request. Don’t invent shallow work for yourself.
  • The 17:30 cutoff forces good use of time earlier in the day. Can’t afford to waste the morning or to let a big deadline creep up on you.
  • This is all about a scarcity mindset. Drain the shallows mercilessly. Use this Fixed Schedule Productivity ruthlessly and relentlessly.
  • Just because your boss sends email or Slack messages at 20:30 doesn’t mean they’re sitting there waiting on a reply that evening. They’re probably just working through their backlog at their own pace.

Become Hard to Reach

  1. Make People Who Send You Email Do More Work
    • Provide people options for contact with clear expectations.
      • e.g. “Shoot an email here if you have something you think I’d find interesting. I guarantee no response unless I am specifically interested in engaging with you about the topic or opportunity.”
    • “sender filter” - This helps the sender understand they should only send you something they truly feel you’d be interested in.
      • “Most people easily accept the idea that you have a right to control your own incoming communication as they would like to enjoy the same right. More importantly, people appreciate clarity.”
  2. Do More Work When You Send or Reply to Emails
    • Avoid productivity land mines
      • e.g. “Great to meet you last week. Do you want to grab coffee?”
      • They prey on your desire to be helpful and you dash off a quick response. In the short term, it feels productive, but in the long term, it just adds a lot of extra noise.
    • Best to consider these messages with requests in them as: “What is the project represented by this message, and what is the most efficient (in terms of messages generated) process for bringing this project to a successful conclusion?”
    • A better response to the coffee request would be process centric: take the time to describe the process you identified in the message, point out the current step, and emphasize the next step.
      • e.g. to meet up, specify a place and a couple of times that you’re available to meet. Provide an out in case the details conflict.
    • Process centric email closes the loop and removes it from the mental foreground. “You’ve dealt with it, next thing.”
  3. Don’t Respond
  • “It’s the sender’s responsibility to convince the receiver that a reply is worthwhile.”
  • Simple rules of “Professorial Email Sorting” - don’t respond if:
    • It’s ambiguous or otherwise makes it hard for you to generate a reasonable response.
    • It’s not a question or proposal that interests you.
    • Nothing really good would happen if you did respond and nothing really bad would happen if you didn’t.
  • This can be an uncomfortable practice due to common conventions around email with expecting replies. There are also exceptions - e.g. your boss emails you.
  • “Develop the habit of letting small bad things happen. If you don’t, you’ll never find time for the truly life-changing big things.”
  • People are adaptable. If they realize you only respond to requests relevant to your interests then they’ll adjust.

Ponylang (SeanTAllen)

Pony Stable 0.2.1 Released July 24, 2019 12:00 AM

July 21, 2019

Ponylang (SeanTAllen)

Last Week in Pony - July 21, 2019 July 21, 2019 02:48 PM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

July 20, 2019

Leo Tindall (LeoLambda)

Simple Elixir Functions July 20, 2019 07:03 PM

I’ve been playing around with Elixir, which is a pretty cool language - specifically, I’m reading Programming Elixir 1.6 which is free for anyone with a .edu email address from The Pragmatic Bookshelf.

reverse/1

I enjoy functional programming, so of course the first two functions I made were list operations. First up is reverse(list) (written as reverse/1, meaning it takes one argument), which reverses a list.

# reverse/1
# Reverse the given list.
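
The excerpt cuts off right at the code, but a tail-recursive reverse/1 along these lines is probably where it is headed (a sketch under that assumption, not necessarily the post's exact version; MyList is just a stand-in module name):

defmodule MyList do
  # reverse/1
  # Reverse the given list by consing each head onto an accumulator.
  def reverse(list), do: do_reverse(list, [])

  defp do_reverse([], acc), do: acc
  defp do_reverse([head | tail], acc), do: do_reverse(tail, [head | acc])
end

# MyList.reverse([1, 2, 3]) #=> [3, 2, 1]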

Sevan Janiyan (sevan)

A week of pkgsrc #13 July 20, 2019 04:56 PM

With the lead up to the release of pkgsrc-2019Q2 I picked up the ball with the testing on OS X Tiger again. It takes about a month for a G4 Mac Mini to attempt a bulk build of the entire pkgsrc tree with compilers usually taking up most days without success. To reduce the turnaround …

Awn Umar (awn)

memory retention attacks July 20, 2019 12:00 AM

In my post on implementing an in-memory encryption scheme to protect sensitive information, I referenced a mitigation strategy called a Boojum. It is described by Bruce Schneier, Niels Ferguson and Tadayoshi Kohno in their book, Cryptography Engineering: Design Principles and Practical Applications.

A number of people asked me about the mechanisms of the attack and the scheme, so I am including the relevant parts here. It is an excellent resource and I recommend that you go and buy it.

page one page two page three

[32] Giovanni Di Crescenzo, Niels Ferguson, Russel Impagliazzo, and Markus Jakobsson. How to Forget a Secret. In Christoph Meinel and Sophie Tison, editors, STACS 99, volume 1563 of Lecture Notes in Computer Science, pages 500-509. Springer-Verlag, 1999.

[57] Peter Gutmann. Secure Deletion of Data from Magnetic and Solid-State Memory. In USENIX Security Symposium Proceedings, 1996. Available from http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html.

[59] J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten. Lest We Remember: Cold Boot Attacks on Encryption Keys. In USENIX Security Symposium Proceedings, pages 45-60, 2008.

July 19, 2019

Jeremy Morgan (JeremyMorgan)

You Can Get the Source Code for Apollo 11 and Take a Course on It July 19, 2019 04:47 AM

In software development you'll hear the term "moon shot". If something is a "moon shot" it's something that's extraordinarily difficult, like landing on the moon. We say this about some app doing something cool, but what about the software that... landed us on the moon? What was the original "moon shot" all about? 

The Software That Put Us on the Moon

Margaret Hamilton

Meet Margaret Hamilton. She was the director of software engineering at the MIT Instrumentation Laboratory, which was contracted to build the onboard software for the Apollo Space Program. This is her standing with the stack of source code used to launch us to the moon. Today we complain when Visual Studio runs slow. 

If you're a developer of any kind, you owe a lot of thanks to her, since "Software Engineer" wasn't even a term before she came along. 

So imagine having to write software that had to work the first time it ran, and run perfectly, or people would die. Sounds like a lot of pressure right? Margaret and her team did it. They built it, and it worked. It succeeded. 

They made a piece of machinery land on the moon with a computer that had about .08 percent of the processing power of an iPhone. 

You can get a lot more background and details here.

The Source Code

A few years ago, Ron Burkey released the source code for Apollo 11. He put a ton of work into this. This is the source code for the Apollo Guidance Computer, or AGC. 

Not only can you download the source code, but he created a simulator for the AGC, at the Virtual AGC Page. You can dig deep into the systems and how they work, trust me it's a rabbit hole for geeks. Tons of awesome stuff here. There's even a kinder, gentler introduction to the AGC you can check out to get familiar.

The AGC source code is written in assembler, which is foreign to most of us. I've played around enough with x86 assembler to know it's not my calling, but perusing through a lot of this source code, you can piece together how some of this stuff works.  

Comanche and Luminary

If you dig into the code, you'll see it's divided into two parts, Comanche and Luminary. 

Comanche is the software for the Command Module, and Luminary is the software for the Lunar Module.

The Command Module was the cone that contained the crew and vital equipment, and was the vessel that returned to Earth. 

Apollo 11 Source Code

The Lunar Module, well, it was the module that landed on the moon. 

Apollo 11 Source Code

It's very interesting to see how these systems interact. When you look through the source code you can see a lot of cool hints about how everything works. 

The DSKY 

The DSKY was the user interface for the computer. You could enter commands through a calculator-like interface and it was connected directly to the AGC. Each command consisted of a verb and a noun. 

Apollo 11 Source Code

If you dig deep enough into the source code you'll see a lot of references to the DSKY and commands related to it. It was a marvel of engineering for its time. Of course, there is a DSKY simulator if you want to play around. 

How Do I Know so Much About This? 

I may sound like a seasoned expert here, but I just took this free course on the Apollo 11 code, then started digging into the code and researching stuff. 

Apollo 11 Source Code

Click Here to Take This Course for FREE

For the 50th anniversary of the Apollo 11 Moon Landing, Simon Allardice created this awesome course on Pluralsight exploring the AGC. It's a quick course, and it won't teach you how to write AGC specific assembler code, but it will give you a good idea of how things worked back then and how the whole thing fits together. After watching this you'll get a better understanding of the source code, and start looking for easter eggs. 

Conclusion

Ok if you're a nerd like me, you love stuff like this. Obviously this isn't some kind of actively developed code where you can do pull requests and add features. Nobody will hire you because you put AGC knowledge on your resume. But it sure is fun to go through this code and figure out how things worked, and look at the comical comments in the code. It's fascinating and well worth the time to explore if you're genuinely curious about this kind of stuff.

You can always yell at me on Twitter if you'd like to discuss it further. 

Enjoy! 




Joe Nelson (begriffs)

History and effective use of Vim July 19, 2019 12:00 AM

This article is based on historical research and on simply reading the Vim user manual cover to cover. Hopefully these notes will help you (re?)discover core functionality of the editor, so you can abandon pre-packaged vimrc files and use plugins more thoughtfully.

physical books

physical books

To go beyond the topics in this blog post, I’d recommend getting a paper copy of the manual and a good pocket reference. I couldn’t find any hard copy of the official Vim manual, and ended up printing this PDF using printme1.com. The PDF is a printer-friendly version of the files $VIMRUNTIME/doc/usr_??.txt distributed with the editor. For a convenient list of commands, I’d recommend the vi and Vim Editors Pocket Reference.

Table of Contents

History

Birth of vi

Vi commands and features go back more than fifty years, starting with the QED editor. Here is the lineage:

  • 1966 : QED (“Quick EDitor”) in Berkeley Timesharing System
  • 1969 Jul: moon landing (just for reference)
  • 1969 Aug: QED -> ed at AT&T
  • 1976 Feb: ed -> em (“Editor for Mortals”) at Queen Mary College
  • 1976 : em -> ex (“EXtended”) at UC Berkeley
  • 1977 Oct: ex gets visual mode, vi

hard copy terminal

You can discover the similarities all the way between QED and ex by reading the QED manual and ex manual. Both editors use a similar grammar to specify and operate on line ranges.

Editors like QED, ed, and em were designed for hard-copy terminals, which are basically electric typewriters with a modem attached. Hard-copy terminals print system output on paper. Output could not be changed once printed, obviously, so the editing process consisted of user commands to update and manually print ranges of text.

video terminal

By 1976 video terminals such as the ADM-3A started to be available. The Ex editor added an “open mode” which allowed intraline editing on video terminals, and a visual mode for screen oriented editing on cursor-addressable terminals. The visual mode (activated with the command “vi”) kept an up-to-date view of part of the file on screen, while preserving an ex command line at the bottom of the screen. (Fun fact: the h,j,k,l keys on the ADM-3A had arrows drawn on them, so that choice of motion keys in vi was simply to match the keyboard.)

Learn more about the journey from ed to ex/vi in this interview with Bill Joy. He talks about how he made ex/vi, and some things that disappointed him about it.

Classic vi is truly just an alter-ego of ex – they are the same binary, which decides to start in ex mode or vi mode based on the name of the executable invoked. The legacy of all this history is that ex/vi is refined by use, requires scant system resources, and can operate under limited bandwidth communication. It is also available on most systems and fully specified in POSIX.

From vi to vim

Being a derivative of ed, the ex/vi editor was intellectual property of AT&T. To use vi on platforms other than Unix, people had to write clones that did not share in the original codebase.

Some of the clones:

  • nvi - 1980 for 4BSD
  • calvin - 1987 for DOS
  • vile - 1990 for DOS
  • stevie - 1987 for Atari ST
  • elvis - 1990 for Minix and 386BSD
  • vim - 1991 for Amiga
  • viper - 1995 for Emacs
  • elwin - 1995 for Windows
  • lemmy - 2002 for Windows

We’ll be focusing on that little one in the middle: vim. Bram Moolenaar wanted to use vi on the Amiga. He began porting Stevie from the Atari and evolving it. He called his port “Vi IMitation.” For a full first-hand account, see Bram’s interview with Free Software Magazine.

By version 1.22 Vim was rechristened “Vi IMproved,” matching and surpassing features of the original. Here is the timeline of the next major versions, with some of their big features:

1991 Nov 2 Vim 1.14: First release (on Fred Fish disk #591).
1992 Vim 1.22: Port to Unix. Vim now competes with Vi.
1994 Aug 12 Vim 3.0: Support for multiple buffers and windows.
1996 May 29 Vim 4.0: Graphical User Interface (largely by Robert Webb).
1998 Feb 19 Vim 5.0: Syntax coloring/highlighting.
2001 Sep 26 Vim 6.0: Folding, plugins, vertical split.
2006 May 8 Vim 7.0: Spell check, omni completion, undo branches, tabs.
2016 Sep 12 Vim 8.0: Jobs, async I/O, native packages.

For more info about each version, see e.g. :help vim8. To see plans for the future, including known bugs, see :help todo.txt.

Version 8 included some async job support due to peer pressure from NeoVim, whose developers wanted to run debuggers and REPLs for their web scripting languages inside the editor.

Vim is super portable. By adapting over time to work on a wide variety of platforms, the editor was forced to keep portable coding habits. It runs on OS/390, Amiga, BeOS and BeBox, Macintosh classic, Atari MiNT, MS-DOS, OS/2, QNX, RISC-OS, BSD, Linux, OS X, VMS, and MS-Windows. You can rely on Vim being there no matter what computer you’re using.

In a final twist in the vi saga, the original ex/vi source code was finally released in 2002 under a BSD free software license. It is available at ex-vi.sourceforge.net.

Let’s get down to business. Before getting to odds, ends, and intermediate tricks, it helps to understand how Vim organizes and reads its configuration files.

Configuration hierarchy

I used to think, incorrectly, that Vim reads all its settings and scripts from the ~/.vimrc file alone. Browsing random “dotfiles” repositories can reinforce this notion. Quite often people publish monstrous single .vimrc files that try to control every aspect of the editor. These big configs are sometimes called “vim distros.”

In reality Vim has a tidy structure, where .vimrc is just one of several inputs. In fact you can ask Vim exactly which scripts it has loaded. Try this: edit a source file from a random programming project on your computer. Once loaded, run

:scriptnames

Take time to read the list. Try to guess what the scripts might do, and note the directories where they live.

Was the list longer than you expected? If you have installed loads of plugins the editor has a lot to do. Check what slows down the editor most at startup by running the following and look at the start.log it creates:

vim --startuptime start.log name-of-your-file

Just for comparison, see how quickly Vim starts without your existing configuration:

vim --clean --startuptime clean.log name-of-your-file

To determine which scripts to run at startup or buffer load time, Vim traverses a “runtime path.” The path is a comma-separated list of directories that each contain a common structure. Vim inspects the structure of each directory to find scripts to run. Directories are processed in the order they appear in the list.

Check the runtimepath on your system by running:

:set runtimepath

My system contains the following directories in the default value for runtimepath. Not all of them even exist in the filesystem, but they would be consulted if they did.

~/.vim
The home directory, for personal preferences.
/usr/local/share/vim/vimfiles
A system-wide Vim directory, for preferences from the system administrator.
/usr/local/share/vim/vim81
Aka $VIMRUNTIME, for files distributed with Vim.
/usr/local/share/vim/vimfiles/after
The “after” directory in the system-wide Vim directory. This is for the system administrator to overrule or add to the distributed defaults.
~/.vim/after
The “after” directory in the home directory. This is for personal preferences to overrule or add to the distributed defaults or system-wide settings.

Because directories are processed in the order they appear in the list, the only thing that is special about the “after” directories is that they are at the end of the list. There is nothing magical about the word “after.”

When processing each directory, Vim looks for subfolders with specific names. To learn more about them, see :help runtimepath. Here is a selection of those we will be covering, with brief descriptions.

plugin/
Vim script files that are loaded automatically when editing any kind of file. Called “global plugins.”
autoload/
(Not to be confused with “plugin.”) Scripts in autoload contain functions that are loaded only when requested by other scripts.
ftdetect/
Scripts to detect filetypes. They can base their decision on filename extension, location, or internal file contents.
ftplugin/
Scripts that are executed when editing files with known type.
compiler/
Definitions of how to run various compilers or linters, and of how to parse their output. Can be shared between multiple ftplugins. Also not applied automatically, must be called with :compiler
pack/
Container for Vim 8 native packages, the successor to “Pathogen” style package management. The native packaging system does not require any third-party code.

Finally, ~/.vimrc is the catchall for general editor settings. Use it for setting defaults that can be overridden for particular file types. For a comprehensive overview of settings you can choose in .vimrc, run :options.

Third-party plugins

Plugins are simply Vim scripts that must be put into the correct places in the runtimepath in order to execute. Installing them is conceptually easy: download the file(s) into place. The challenge is that it’s hard to remove or update some plugins because they litter subdirectories in the runtimepath with their scripts, and it can be hard to tell which plugin is responsible for which files.

“Plugin managers” evolved to address this need. Vim.org has had a plugin registry going back at least as far as 2003 (as identified by the Internet Archive). However it wasn’t until about 2008 that the notion of a plugin manager really came into vogue.

These tools add plugins’ separate directories to Vim’s runtimepath, and compile help tags for plugin documentation. Most plugin managers also install and update plugin code from the internet, sometimes in parallel or with colorful progress bars.

In chronological order, here is the parade of plugin managers. I based the date ranges on earliest and latest releases of each, or when no official releases are identified, on the earliest and latest commit dates.

  • Mar 2006 - Jul 2014 : Vimball (A distribution format and associated Vim commands)
  • Oct 2008 - Dec 2015 : Pathogen (Deprecated in favor of native vim packages)
  • Aug 2009 - Dec 2009 : Vimana
  • Dec 2009 - Dec 2014 : VAM
  • Aug 2010 - Nov 2010 : Jolt
  • Oct 2010 - Nov 2012 : tplugin
  • Oct 2010 - Feb 2014 : Vundle (Discontinued after NeoBundle ripped off code)
  • Mar 2012 - Mar 2018 : vim-flavor
  • Apr 2012 - Mar 2016 : NeoBundle (Deprecated in favor of dein)
  • Jan 2013 - Aug 2017 : infect
  • Feb 2013 - Aug 2016 : vimogen
  • Oct 2013 - Jan 2015 : vim-unbundle
  • Dec 2013 - Jul 2015 : Vizardry
  • Feb 2014 - Oct 2018 : vim-plug
  • Jan 2015 - Oct 2015 : enabler
  • Aug 2015 - Apr 2016 : Vizardry 2
  • Jan 2016 - Jun 2018 : dein.vim
  • Sep 2016 - Present : native in Vim 8
  • Feb 2017 - Sep 2018 : minpac
  • Mar 2018 - Mar 2018 : autopac
  • Feb 2017 - Jun 2018 : pack
  • Mar 2017 - Sep 2017 : vim-pck
  • Sep 2017 - Sep 2017 : vim8-pack
  • Sep 2017 - May 2019 : volt
  • Sep 2018 - Feb 2019 : vim-packager
  • Feb 2019 - Feb 2019 : plugpac.vim

The first thing to note is the overwhelming variety of these tools, and the second is that each is typically active for about four years before presumably going out of fashion.

The most stable way to manage plugins is to simply use Vim 8’s built-in functionality, which requires no third-party code. Let’s walk through how to do it.

First create two directories, opt and start, within a pack directory in your runtimepath.

mkdir -p ~/.vim/pack/foobar/{opt,start}

Note the placeholder “foobar.” This name is entirely up to you. It classifies the packages that will go inside. Most people throw all their plugins into a single nondescript category, which is fine. Pick whatever name you like; I’ll continue to use foobar here. You could theoretically create multiple categories too, like ~/.vim/pack/navigation and ~/.vim/pack/linting. Note that Vim does not detect duplication between categories and will double-load duplicates if they exist.

Packages in “start” get loaded automatically, whereas those in “opt” won’t load until specifically requested in Vim with the :packadd command. Opt is good for lesser-used packages, and keeps Vim fast by not running scripts unnecessarily. Note that there isn’t a counterpart to :packadd to unload a package.

For this example we’ll add the “ctrlp” fuzzy find plugin to opt. Download and extract the latest release into place:

curl -L https://github.com/kien/ctrlp.vim/archive/1.79.tar.gz \
	| tar zx -C ~/.vim/pack/foobar/opt

That command creates a ~/.vim/pack/foobar/opt/ctrlp.vim-1.79 folder, and the package is ready to use. Back in vim, create a helptags index for the new package:

:helptags ~/.vim/pack/foobar/opt/ctrlp.vim-1.79/doc

That creates a file called “tags” in the package’s doc folder, which makes the topics available for browsing in Vim’s internal help system. (Alternately you can run :helptags ALL once the package has been loaded, which takes care of all docs in the runtimepath.)

When you want to use the package, load it (and know that tab completion works for plugin names, so you don’t have to type the whole name):

:packadd ctrlp.vim-1.79

Packadd includes the package’s base directory in the runtimepath, and sources its plugin and ftdetect scripts. After loading ctrlp, you can press CTRL-P to pop up a fuzzy find file matcher.

Some people keep their ~/.vim directory under version control and use git submodules for each package. For my part, I simply extract packages from tarballs and track them in my own repository. If you use mature packages you don’t need to upgrade them often, plus the scripts are generally small and don’t clutter git history much.
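
If you prefer the submodule route, it might look roughly like this (reusing the foobar category and the ctrlp plugin from the example above; a submodule tracks the plugin's git history rather than the 1.79 tarball):

cd ~/.vim
git init   # skip if ~/.vim is already a repository
git submodule add https://github.com/kien/ctrlp.vim.git pack/foobar/opt/ctrlp.vim
git commit -m "Add ctrlp as an opt package"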

Backups and undo

Depending on user settings, Vim can protect against four types of loss:

  1. A crash during editing (between saves). Vim can protect against this one by periodically saving unwritten changes to a swap file.
  2. Editing the same file with two instances of Vim, overwriting changes from one or both instances. Swap files protect against this too.
  3. A crash during the save process itself, after the destination file is truncated but before the new contents have been fully written. Vim can protect against this with a “writebackup.” To do this, it writes to a new file and swaps it with the original on success, in a way that depends on the “backupcopy” setting.
  4. Saving new file contents but wanting the original back. Vim can protect against this by persisting the backup copy of the file after writing changes.

Before examining sensible settings, how about some comic relief? Here are just a sampling of comments from vimrc files on GitHub:

  • “Do not create swap file. Manage this in version control”
  • “Backups are for pussies. Use version control”
  • “use version control FFS!”
  • “We live in a world with version control, so get rid of swaps and backups”
  • “don’t write backup files, version control is enough backup”
  • “I’ve never actually used the VIM backup files… Use version control”
  • “Since most stuff is on version control anyway”
  • “Disable backup files, you are using a version control system anyway :)”
  • “version control has arrived, git will save us”
  • “disable swap and backup files (Always use version control! ALWAYS!)”
  • “Turn backup off, since I version control everything”

The comments reflect awareness of only the fourth case above (and the third by accident), whereas the authors generally go on to disable the swap file too, leaving one and two unprotected.

Here is the configuration I recommend to keep your edits safe:

" Protect changes between writes. Default values of
" updatecount (200 keystrokes) and updatetime
" (4 seconds) are fine
set swapfile
set directory^=~/.vim/swap//

" protect against crash-during-write
set writebackup
" but do not persist backup after successful write
set nobackup
" use rename-and-write-new method whenever safe
set backupcopy=auto
" patch required to honor double slash at end
if has("patch-8.1.0251")
	" consolidate the writebackups -- not a big
	" deal either way, since they usually get deleted
	set backupdir^=~/.vim/backup//
end

" persist the undo tree for each file
set undofile
set undodir^=~/.vim/undo//

These settings enable backups for writes-in-progress, but do not persist them after successful write because version control etc etc. Note that you’ll need to mkdir ~/.vim/{swap,undo,backup} or else Vim will fall back to the next available folder in the preference list. You should also probably chmod the folders to keep the contents private, because the swap files and undo history might contain sensitive information.

One thing to note about the paths in our config is that they end in a double slash. That ending enables a feature to disambiguate swaps and backups for files with the same name that live in different directories. For instance the swap file for /foo/bar will be saved in ~/.vim/swap/%foo%bar.swp (slashes escaped as percent signs). Vim had a bug until a fairly recent patch where the double slash was not honored for backupdir, and we guard against that above.

We also have Vim persist the history of undos for each file, so that you can apply them even after quitting and editing the file again. While it may sound redundant with the swap file, the undo history is complementary because it is written only when the file is written. (If it were written more frequently it might not match the state of the file on disk after a crash, so Vim doesn’t do that.)

Speaking of undo, Vim maintains a full tree of edit history. This means you can make a change, undo it, then redo it differently and all three states are recoverable. You can see the times and magnitude of changes with the :undolist command, but it’s hard to visualize the tree structure from it. You can navigate to specific changes in that list, or move in time with :earlier and :later which take a time argument like 5m, or the count of file saves, like 3f. However navigating the undo tree is an instance when I think a plugin – like undotree – is warranted.
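
For example, the time-travel commands from that paragraph look like this in practice:

" Inspect the saved undo states and their timestamps
:undolist
" Rewind the buffer to how it looked five minutes ago
:earlier 5m
" Jump forward again, to the state after two more file writes
:later 2f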

Enabling these disaster recovery settings can bring you peace of mind. I used to save compulsively after most edits or when stepping away from the computer, but now I’ve made an effort to leave documents unsaved for hours at a time. I know how the swap file works now.

Some final notes: keep an eye on all these disaster recovery files, they can pile up in your .vim folder and use space over time. Also setting nowritebackup might be necessary when saving a huge file with low disk space, because Vim must otherwise make an entire copy of the file temporarily. By default the “backupskip” setting disables backups for anything in the system temp directory.

Vim’s “patchmode” is related to backups. You can use it in directories that aren’t under version control. For instance if you want to download a source tarball, make an edit and send a patch over a mailing list without bringing git into the picture. Run :set patchmode=.orig and any file ‘foo’ Vim is about to write will be backed up to ‘foo.orig’. You can then create a patch on the command line between the .orig files and the new ones.

Include and path

Most programming languages allow you to include one module or file from another. Vim knows how to track program identifiers in included files using the configuration settings path, include, suffixesadd, and includeexpr. The identifier search (see :help include-search) is an alternative to maintaining a tags file with ctags for system headers.

The settings for C programs work out of the box. Other languages are supported too, but require tweaking. That’s outside the scope of this article, see :help include.

If everything is configured right, you can press [i on an identifier to display its definition, or [d for a macro constant. Also when you press gf with the cursor on a filename, Vim searches the path to find it and jump there. Because the path also affects the :find command, some people have the tendency to add ‘**/*’ or commonly accessed directories to the path in order to use :find like a poor man’s fuzzy finder. Doing this slows down the identifier search with directories which aren’t relevant to that task.

A way to get the same level of crappy find capability, without polluting the path, is to just make another mapping. You can then press <Leader><space> (which is typically backslash space) then start typing a filename and use tab or CTRL-D completion to find the file.

" fuzzy-find lite
nmap <Leader><space> :e ./**/

Just to reiterate: the path parameter was designed for header files. If you want more proof, there is even a :checkpath command to see whether the path is functioning. Load a C file and run :checkpath. It will display filenames it was unable to find that are included transitively by the current file. Also :checkpath! with a bang dumps the whole hierarchy of files included from the current file.

By default path has the value “.,/usr/include,,” meaning the working directory, /usr/include, and files that are siblings of the active buffer. The directory specifiers and globs are pretty powerful, see :help file-searching for the details.

In my C ftplugin (more on that later), I also have the path search for include files within the current project, like ./src/include or ./include .

setlocal path=.,,*/include/**3,./*/include/**3
setlocal path+=/usr/include

The ** with a number like **3 bounds the depth of the search in subdirectories. It’s wise to add depth bounds where you can to avoid identifier searches that lock up.

Here are other patterns you might consider adding to your path if :checkpath identifies that files can’t be found in your project. It depends on your system of course.

  • More system includes: /usr/include/**4,/usr/local/include/**3
  • Homebrew library headers: /usr/local/Cellar/**2/include/**2
  • Macports library headers: /opt/local/include/**
  • OpenBSD library headers: /usr/local/lib/*/include,/usr/X11R6/include/**3

See also: :he [, :he gf, :he :find.

Edit ⇄ compile cycle

The :make command runs a program of the user’s choice to build a project, and collects the output in the quickfix buffer. Each quickfix item records the filename, line, column, type (warning/error) and message of one output item. A fairly idiomatic mapping uses bracket commands to move through quickfix items:

" quickfix shortcuts
nmap ]q :cnext<cr>
nmap ]Q :clast<cr>
nmap [q :cprev<cr>
nmap [Q :cfirst<cr>

If, after updating the program and rebuilding, you are curious what the error messages said last time, use :colder (and :cnewer to return). To see more information about the currently selected error use :cc, and use :copen to see the full quickfix buffer. You can populate the quickfix yourself without running :make with :cfile, :caddfile, or :cexpr.
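
A couple of hedged examples (build.log is a made-up file name, and the make invocation assumes a Unix-like shell):

:cfile build.log             load errors from a saved build log
:caddfile more-errors.log    append more entries instead of replacing
:cexpr system('make 2>&1')   run a command and parse its output with errorformat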

Vim parses output from the build process according to the errorformat string, which contains scanf-like escape sequences. It’s typical to set this in a “compiler file.” For instance, Vim ships with one for gcc in $VIMRUNTIME/compiler/gcc.vim, but has no compiler file for clang. I created the following definition for ~/.vim/compiler/clang.vim:

" formatting variations documented at
" https://clang.llvm.org/docs/UsersManual.html#formatting-of-diagnostics
"
" It should be possible to make this work for the combination of
" -fno-show-column and -fcaret-diagnostics as well with multiline
" and %p, but I was too lazy to figure it out.
"
" The %D and %X patterns are not clang per se. They capture the
" directory change messages from (GNU) 'make -w'. I needed this
" for building a project which used recursive Makefiles.

CompilerSet errorformat=
	\%f:%l:%c:{%*[^}]}{%*[^}]}:\ %trror:\ %m,
	\%f:%l:%c:{%*[^}]}{%*[^}]}:\ %tarning:\ %m,
	\%f:%l:%c:\ %trror:\ %m,
	\%f:%l:%c:\ %tarning:\ %m,
	\%f(%l,%c)\ :\ %trror:\ %m,
	\%f(%l,%c)\ :\ %tarning:\ %m,
	\%f\ +%l:%c:\ %trror:\ %m,
	\%f\ +%l:%c:\ %tarning:\ %m,
	\%f:%l:\ %trror:\ %m,
	\%f:%l:\ %tarning:\ %m,
	\%D%*\\a[%*\\d]:\ Entering\ directory\ %*[`']%f',
	\%D%*\\a:\ Entering\ directory\ %*[`']%f',
	\%X%*\\a[%*\\d]:\ Leaving\ directory\ %*[`']%f',
	\%X%*\\a:\ Leaving\ directory\ %*[`']%f',
	\%DMaking\ %*\\a\ in\ %f

CompilerSet makeprg=make

To activate this compiler profile, run :compiler clang. This is typically done in an ftplugin file.

Another example is running GNU Diction on a text document to identify wordy and commonly misused phrases in sentences. Create a “compiler” called diction.vim:

CompilerSet errorformat=%f:%l:\ %m
CompilerSet makeprg=diction\ -s\ %

After you run :compiler diction you can use the normal :make command to run it and populate the quickfix. The final mild convenience in my .vimrc is a mapping to run make:

" real make
map <silent> <F5> :make<cr><cr><cr>
" GNUism, for building recursively
map <silent> <s-F5> :make -w<cr><cr><cr>

Diffs and patches

Vim’s internal diffing is powerful, but it can be daunting, especially the three-way merge view. In reality it’s not so bad once you take the time to study it. The main idea is that every window is either in or out of “diff mode.” All windows put into diff mode (with :difft[his]) get compared with all other windows already in diff mode.

For example, let’s start simple. Create two files:

echo "hello, world" > h1
echo "goodbye, world" > h2

vim h1 h2

In vim, split the arguments into their own windows with :all. In the top window, for h1, run :difft. You’ll see a gutter appear, but no difference detected. Move to the other window with CTRL-W CTRL-W and run :difft again. Now hello and goodbye are identified as different in the current chunk. Continuing in the bottom window, you can run :diffg[et] to get “hello” from the top window, or :diffp[ut] to send “goodbye” into the top window. Pressing ]c or [c would move between chunks if there were more than one.

A shortcut would be running vim -d h1 h2 instead (or its alias, vimdiff h1 h2) which applies :difft to all windows. Alternatively, load just h1 with vim h1 and then :diffsplit h2. Remember that fundamentally these commands just load files into windows and set the diff mode.
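
For quick reference, here are the commands covered above plus a couple of related ones:

:windo difft     put every open window into diff mode
]c and [c        jump between chunks
:diffget         pull the current chunk from the other window into this one
:diffput         push the current chunk from this window into the other one
:diffoff!        turn diff mode off in all windows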

With these basics in mind, let’s learn to use Vim as a three-way mergetool for git. First configure git:

git config merge.tool vimdiff
git config merge.conflictstyle diff3
git config mergetool.prompt false

Now, when you hit a merge conflict, run git mergetool. It will bring Vim up with four windows. This part looks scary, and is where I used to flail around and often quit in frustration.

+-----------+------------+------------+
|           |            |            |
|           |            |            |
|   LOCAL   |    BASE    |   REMOTE   |
+-----------+------------+------------+
|                                     |
|                                     |
|             (edit me)               |
+-------------------------------------+

Here’s the trick: do all the editing in the bottom window. The top three windows simply provide context about how the file differs on either side of the merge (local / remote), and how it looked prior to either side doing any work (base).

Move within the bottom window with ]c, and for each chunk choose whether to replace it with text from local, base, or remote – or whether to write in your own change which might combine parts from several.

To make it easier to pull changes from the top windows, I set some mappings in my vimrc:

" shortcuts for 3-way merge
map <Leader>1 :diffget LOCAL<CR>
map <Leader>2 :diffget BASE<CR>
map <Leader>3 :diffget REMOTE<CR>

We’ve already seen :diffget, and here our bindings pass an argument of the buffer name that identifies which window to pull from.

Once done with the merge, run :wqa to save all the windows and quit. If you want to abandon the merge instead, run :cq to abort all changes and return an error code to the shell. This will signal to git that it should ignore your changes.

Diffget can also accept a range. If you want to pull in all changes from one of the top windows rather than working chunk by chunk, just run :1,$+1diffget {LOCAL,BASE,REMOTE}. The “+1” is required because there can be deleted lines “below” the last line of a buffer.

The three-way merge is fairly easy after all. There’s no need for plugins like Fugitive, at least for presenting a simplified view for resolving merge conflicts.

Finally, as of patch 8.1.0360, Vim is bundled with the xdiff library and can create diffs internally. This can be more efficient than shelling out to an external program, and allows for a choice of diff algorithms. The “patience” algorithm often produces more human-readable output than the default, “myers.” Set it in your .vimrc like so:

if has("patch-8.1.0360")
	set diffopt+=internal,algorithm:patience
endif

Buffer I/O

See if this sounds familiar: you’re editing a buffer and want to save it as a new file, so you :w newname. After editing some more, you :w, but it writes over the original file. What you want for this scenario is :saveas newname, which does the write but also changes the filename of the buffer for future writes. Alternately, the :file newname command will change the filename without doing a write.
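
Side by side (newname stands for whatever path you like):

:w newname        write a copy; the buffer keeps its old filename
:saveas newname   write a copy and rename the buffer, so later :w goes to newname
:file newname     rename the buffer only, without writing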

It also pays off to learn more about the read and write commands. Because r and w are Ex commands, they work with ranges. Here are some variations you might not know about:

:w >>foo                        append the whole buffer to a file
:.w >>foo                       append the current line to a file
:$r foo                         read foo into the end of the buffer
:0r foo                         read foo into the start, moving existing lines down
:.,$w foo                       write the current line and below to a file
:r !ls                          read ls output into the cursor position
:w !wc                          send the buffer to wc and display the output
:.!tr 'A-Za-z' 'N-ZA-Mn-za-m'   apply ROT-13 to the current line
:w|so %                         chain commands: write and then source the buffer
:e!                             throw away unsaved changes, reload the buffer
:hide edit foo                  edit foo, hiding the current buffer even if dirty

Useless fun fact: we piped a line to tr in an example above to apply a ROT-13 cipher, but Vim has that functionality built in with the g? command. Apply it to a motion, like g?$.

Filetypes

Filetypes are a way to change settings based on the type of file detected in a buffer. They don’t need to be automatically detected, though; we can set them manually to interesting effect. One example is hex editing. Any file can be viewed as raw hexadecimal values. GitHub user the9ball created a clever ftplugin script that filters a buffer back and forth through the xxd utility for hex editing.

The xxd utility was bundled as part of Vim 5 for convenience. The Vim todo.txt file mentions they want to make it more seamless to edit binary files, but xxd can take us pretty far.

Here is code you can put in ~/.vim/ftplugin/xxd.vim. Its presence in ftplugin means Vim will execute the script when filetype (aka “ft”) becomes xxd. I added some basic comments to the script.

" without the xxd command this is all pointless
if !executable('xxd')
	finish
endif

" don't insert a newline in the final line if it
" doesn't already exist, and don't insert linebreaks
setlocal binary noendofline
silent %!xxd -g 1
%s/\r$//e

" put the autocmds into a group for easy removal later
augroup ftplugin-xxd
	" erase any existing autocmds on buffer
	autocmd! * <buffer>

	" before writing, translate back to binary
	autocmd BufWritePre <buffer> let b:xxd_cursor = getpos('.')
	autocmd BufWritePre <buffer> silent %!xxd -r

	" after writing, restore hex view and mark unmodified
	autocmd BufWritePost <buffer> silent %!xxd -g 1
	autocmd BufWritePost <buffer> %s/\r$//e
	autocmd BufWritePost <buffer> setlocal nomodified
	autocmd BufWritePost <buffer> call setpos('.', b:xxd_cursor) | unlet b:xxd_cursor

	" update text column after changing hex values
	autocmd TextChanged,InsertLeave <buffer> let b:xxd_cursor = getpos('.')
	autocmd TextChanged,InsertLeave <buffer> silent %!xxd -r
	autocmd TextChanged,InsertLeave <buffer> silent %!xxd -g 1
	autocmd TextChanged,InsertLeave <buffer> call setpos('.', b:xxd_cursor) | unlet b:xxd_cursor
augroup END

" when filetype is set to no longer be "xxd," put the binary
" and endofline settings back to what they were before, remove
" the autocmds, and replace buffer with its binary value
let b:undo_ftplugin = 'setl bin< eol< | execute "au! ftplugin-xxd * <buffer>" | execute "silent %!xxd -r"'

Try opening a file, then running :set ft. Note what type it is. Then :set ft=xxd. Vim will turn into a hex editor. To restore your view, :set ft=foo where foo was the original type. Note that in hex view you even get syntax highlighting because $VIMRUNTIME/syntax/xxd.vim ships with Vim by default.

Notice the nice use of “b:undo_ftplugin” which is an opportunity for filetypes to clean up after themselves when the user or ftdetect mechanism switches away from them to another filetype. (The example above could use a little work because if you :set ft=xxd then set it back, the buffer is marked as modified even if you never changed anything.)

Ftplugins also allow you to refine an existing filetype. For instance, Vim already has some good defaults for C programming in $VIMRUNTIME/ftplugin/c.vim. I put these extra options in ~/.vim/after/ftplugin/c.vim to add my own settings on top:

" the smartest indent engine for C
setlocal cindent
" my preferred "Allman" style indentation
setlocal cino="Ls,:0,l1,t0,(s,U1,W4"

" for quickfix errorformat
compiler clang
" shows long build messages better
setlocal ch=2

" auto-create folds per grammar
setlocal foldmethod=syntax
setlocal foldlevel=10

" local project headers
setlocal path=.,,*/include/**3,./*/include/**3
" basic system headers
setlocal path+=/usr/include

setlocal tags=./tags,tags;~
"                      ^ in working dir, or parents
"                ^ sibling of open file

" the default is menu,preview but the preview window is annoying
setlocal completeopt=menu

iabbrev #i #include
iabbrev #d #define
iabbrev main() int main(int argc, char **argv)

" add #include guard
iabbrev #g _<c-r>=expand("%:t:r")<cr><esc>VgUV:s/[^A-Z]/_/g<cr>A_H<esc>yypki#ifndef <esc>j0i#define <esc>o<cr><cr>#endif<esc>2ki

Notice how the script uses “setlocal” rather than “set.” This applies the changes to just the current buffer rather than the whole Vim instance.

This script also enables some light abbreviations. Like I can type #g and press enter and it adds an include guard with the current filename:

#ifndef _FILENAME_H
#define _FILENAME_H

/* <-- cursor here */

#endif

You can also mix filetypes by using a dot (“.”). Here is one application. Different projects have different coding conventions, so you can combine your default C settings with those for a particular project. The OpenBSD source code follows the style(9) format, so let’s make a special openbsd filetype. Combine the two filetypes with :set ft=c.openbsd on relevant files.

To detect the openbsd filetype we can look at the contents of buffers rather than just their extensions or locations on disk. The telltale sign is that C files in the OpenBSD source contain /* $OpenBSD: in the first line.

To detect them, create ~/.vim/after/ftdetect/openbsd.vim:

augroup filetypedetect
        au BufRead,BufNewFile *.[ch]
                \  if getline(1) =~ 'OpenBSD:'
                \|   setl ft=c.openbsd
                \| endif
augroup END

The Vim port for OpenBSD already includes a special syntax file for this filetype: /usr/local/share/vim/vimfiles/syntax/openbsd.vim. If you recall, the /usr/local/share/vim/vimfiles directory is in the runtimepath and is set aside for files from the system administrator. The provided openbsd.vim script includes a function:

function! OpenBSD_Style()
	setlocal cindent
	setlocal cinoptions=(4200,u4200,+0.5s,*500,:0,t0,U4200
	setlocal indentexpr=IgnoreParenIndent()
	setlocal indentkeys=0{,0},0),:,0#,!^F,o,O,e
	setlocal noexpandtab
	setlocal shiftwidth=8
	setlocal tabstop=8
	setlocal textwidth=80
endfun

We simply need to call the function at the appropriate time. Create ~/.vim/after/ftplugin/openbsd.vim:

call OpenBSD_Style()

Now opening any C or header file with the characteristic comment at the top will be recognized as type c.openbsd and will use indenting options that conform with the style(9) man page.

Don’t forget the mouse

This is a friendly reminder that despite our command-line machismo, the mouse is in fact supported in Vim, and can do some things more easily than the keyboard. Mouse events work even over SSH thanks to xterm turning mouse events into stdin escape codes.

To enable mouse support, set mouse=n. Many people use mouse=a to make it work in all modes, but I prefer to enable it only in normal mode. This avoids creating visual selections when I click links with a keyboard modifier to open them in my browser.

Here are things the mouse can do:

  • Open or close folds (when foldcolumn > 0).
  • Select tabs (beats gt gt gt…)
  • Click to complete a motion, like d<click!>. Similar to the easymotion plugin but without any plugin.
  • Jump to help topics with double click.
  • Drag the status line at the bottom to change cmdheight.
  • Drag edge of window to resize.
  • Scroll wheel.

Misc editing

This section could be enormous, but I’ll stick to a few tricks I learned. The first one that blew me away was :set virtualedit=all. It allows you to move the cursor anywhere in the window. If you enter characters or insert a visual block, Vim will add whatever spaces are required to the left of the inserted characters to keep them in place. Virtual edit mode makes it simple to edit tabular data. Turn it off with :set virtualedit=.

Next are some movement commands. I used to rely a lot on } to jump by paragraphs, and just muscle my way down the page. However the ] character makes more precise motions: by function ]], scope ]}, paren ]), comment ]/, diff block ]c. This series is why the quickfix mapping ]q mentioned earlier fits the pattern so well.

For big jumps I used to try things like 1000j, but in normal mode you can actually just type a percentage and Vim will go there, like 50%. Speaking of scroll percentage, you can see it at any time with CTRL-G. Thus I now do :set noruler and ask to see the info as needed. It’s less cluttered. Kind of the opposite of the trend of colorful patched font powerlines.

After jumping around between tags, files, or within a file, there are some commands to get your bearings. Try :ls, :tags, :jumps, and :marks. Jumping through tags actually creates a stack, and you can press CTRL-T to pop one back. I used to always press CTRL-O to back out of jumps, but it is not as direct as popping the tag stack.

In a project directory that has been indexed with ctags, you can open the editor directly to a tag with -t, like vim -t main. To find tags files more flexibly, set the tags configuration variable. Note the semicolon in the example below that allows Vim to search the current directory upward to the home directory. This way you could have a more general system tags file outside the project folder.

set tags=./tags,**5/tags,tags;~
"                          ^ in working dir, or parents
"                   ^ in any subfolder of working dir
"           ^ sibling of open file

There are some buffer tricks too. Switching to a buffer with :bu can take a fragment of the buffer name, not just a number. Sometimes it’s harder to memorize those numbers than remember the name of a source file. You can navigate buffers with marks too. If you use a capital letter as the name of a mark, you can jump to it across buffers. You could set a mark H in a header, C in a source file, and M in a Makefile to go from one buffer to another.
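
For example (the mark names are arbitrary):

mH      in the header buffer, set global mark H
mC      in the C file, set global mark C
'H      from any buffer, jump back to the line marked in the header
`C      jump to the exact cursor position marked in the C file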

Do you ever get mad after yanking a word, deleting a word somewhere else, trying to paste the first word in, and then discovering your original yank has been overwritten? The Vim registers are underappreciated for this. Inspect their contents with :reg. The most recent yank stays in register "0 while deletions rotate through "1 - "9, so "0p pastes your yank even after a later delete. The special registers "+ and "* can copy/paste from/to the system clipboard. They usually mean the same thing, except in some X11 setups that distinguish the primary and clipboard selections.
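
A few register commands worth keeping in muscle memory:

:reg 0 1 +    inspect specific registers
"0p           paste the most recent yank, even after a later delete
"1p           paste the most recent (multi-line) delete
"+yy          yank the current line into the system clipboard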

Another handy hidden feature is the command line window. It’s a buffer that contains your previous commands and searches. Bring it up with q: or q/. Once inside you can move to any line and press enter to run it. However, you can also edit any of the lines before pressing enter. Your changes won’t affect the history entry; the edited command will merely be added to the bottom of the list.

This article could go on and on, so I’m going to call it here. For more great topics, see these help sections: views-sessions, viminfo, TOhtml, ins-completion, cmdline-completion, multi-repeat, scroll-cursor, text-objects, grep, netrw-contents.

vim logo

July 18, 2019

Awn Umar (awn)

encrypting secrets in memory July 18, 2019 12:00 AM

In a previous post I talked about designing and implementing an in-memory data structure for storing sensitive information in Go. The latest version of memguard adds something new: encryption.

But why? Well, there are limitations to the old solution of using guarded heap allocations for everything.

  1. A minimum of three memory pages have to be allocated for each value: two guard pages sandwiching $n \geq 1$ data pages.
  2. Some systems impose an upper limit on the amount of memory that an individual process is able to prevent from being swapped out to disk.

Memory layout of guarded heap allocation.

Typical layout of a 32 byte guarded heap allocation.

So it is worth looking into the use of encryption to protect information. After all, ciphertext does not have to be treated with much care and authentication guarantees immutability for free. The problem of recovering a secret is shifted to recovering the key that protects it.

But there is the obvious problem. Where and how do you store the encryption key?

We use a scheme described by Bruce Schneier in Cryptography Engineering. The procedure is sometimes referred to as a Boojum. I will formally define it below for convenience.

Define $B = \{i : 0 \leq i \leq 255\}$ to be the set of values that a byte may take. Define $h : B^n \to B^{32}$ for $n \in \mathbb{N}_0$ to be a cryptographically-secure hash function. Define the binary XOR operator $\oplus : B^n \times B^n \to B^n$.

Suppose $k \in B^{32}$ is the key we want to protect and $R_n \in B^{32}$ is some random bytes sourced from a suitable CSPRNG. We initialise the two partitions:

$$
\begin{aligned}
x_1 &= R_1 \\
y_1 &= h(x_1) \oplus k
\end{aligned}
$$

storing each inside its own guarded heap allocation. Then every $m$ milliseconds we overwrite each value with:

$$
\begin{aligned}
x_{n+1} &= x_n \oplus R_{n+1} \\
        &= R_1 \oplus R_2 \oplus \cdots \oplus R_{n+1} \\
y_{n+1} &= y_n \oplus h(x_n) \oplus h(x_{n+1}) \\
        &= h(x_n) \oplus h(x_n) \oplus h(x_{n+1}) \oplus k \\
        &= h(x_{n+1}) \oplus k
\end{aligned}
$$

and so on. Then by the properties of XOR,

$$
\begin{aligned}
k &= h(x_n) \oplus y_n \\
  &= h(x_n) \oplus h(x_n) \oplus k \\
  &= 0 \oplus k \\
  &= k
\end{aligned}
$$

It is clear from this that our iterative overwriting steps do not affect the value of $k$, and the proof also gives us a way of retrieving $k$. My own implementation of the protocol in fairly idiomatic Go code is available here.
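
To make the procedure concrete, here is a minimal sketch of those steps in Go - not the memguard implementation - using SHA-256 for h and plain slices instead of guarded allocations:

package main

import (
	"bytes"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// xor32 returns a XOR b for two 32-byte slices.
func xor32(a, b []byte) []byte {
	out := make([]byte, 32)
	for i := range out {
		out[i] = a[i] ^ b[i]
	}
	return out
}

// split initialises the partitions: x = R_1, y = h(x) XOR k.
func split(k []byte) (x, y []byte) {
	x = make([]byte, 32)
	rand.Read(x)
	h := sha256.Sum256(x)
	return x, xor32(h[:], k)
}

// rekey overwrites both partitions in place; k itself never appears.
func rekey(x, y []byte) {
	r := make([]byte, 32)
	rand.Read(r)
	hOld := sha256.Sum256(x)
	for i := range x {
		x[i] ^= r[i] // x_{n+1} = x_n XOR R_{n+1}
	}
	hNew := sha256.Sum256(x)
	for i := range y {
		y[i] ^= hOld[i] ^ hNew[i] // y_{n+1} = y_n XOR h(x_n) XOR h(x_{n+1})
	}
}

// reveal recovers k = h(x) XOR y when the key is actually needed.
func reveal(x, y []byte) []byte {
	h := sha256.Sum256(x)
	return xor32(h[:], y)
}

func main() {
	k := make([]byte, 32)
	rand.Read(k)
	x, y := split(k)
	for i := 0; i < 1000; i++ { // memguard does this every few milliseconds
		rekey(x, y)
	}
	fmt.Println(bytes.Equal(reveal(x, y), k)) // true
}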

An issue with the Boojum scheme is that it has a relatively high overhead from two guarded allocations using six memory pages in total, and we have to compute and write 64 bytes every $m$ milliseconds. However we only store a single global key, and the overhead can be tweaked by scaling $m$ as needed. Its value at the time of writing is 8 milliseconds.

The authors of the Boojum claim that it defends against cold boot attacks, and I would speculate that there is also some defence against side-channel attacks due to the fact that $k$ is split across two different locations in memory and each partition is constantly changing. Those attacks usually have an error rate and are relatively slow.

OpenBSD added a somewhat related mitigation to their SSH implementation that stores a 16 KiB (static) “pre-key” that is hashed to derive the final key when it is needed. I investigated incorporating it somehow but decided against it. Both schemes have a weak point when the key is in “unlocked” form, so minimising this window of opportunity is ideal.

In memguard the key is initialised when the program starts and then hangs around in the background—constantly flickering—until it is needed. When some data needs to be encrypted or decrypted, the key is unlocked and used for the operation and then it is destroyed.

Diagram showing the high-level structure of the scheme.

High-level overview of the encryption scheme.

The documentation provides a relatively intuitive guide to the package’s functionality. The Enclave stores ciphertext, the LockedBuffer stores plaintext, and core.Coffer implements the Boojum. Examples are available in the examples sub-package.

The most pressing issue at the moment is that the package relies on cryptographic primitives implemented by the Go standard library which does not secure its own memory and may leak values that are passed to it. There has been some discussion about this but for now it seems as though rewriting crucial security APIs to use specially allocated memory is the only feasible solution.

If you have any ideas and wish to contribute, please do get in touch or open a pull request.

Pepijn de Vos (pepijndevos)

VHDL to PCB July 18, 2019 12:00 AM

When learning to program FPGAs using VHDL or Verilog, you also learn that these hardware description languages can be used to design ASICs (application specific integrated circuit). But this is only something big corporations with millions of dollars can afford, right? Even though I later learned it only costs thousands, not millions, to make an ASIC on an older process, it is still far away from hobby budgets.

I had been keeping an eye on Yosys, the open source HDL synthesis tool, which can apparently target ASICs if you give it a liberty file that specifies the logic cells your foundry supports. Meanwhile I also toyed with the idea of making a 7400-series computer, and I wondered if you could write a liberty file for 7400 chips. I had kind of dismissed the idea, but then ZirconiumX came along and did it.

It suffices to say this revived my interest in the idea and a lively discussion and many pull requests followed. First some small changes, then simulations to verify the synthesized result is still correct, and finally a KiCad netlist generator.

You see, generating a Yosys netlist is nice, but eventually these 7400 chips have to end up on a PCB somehow. Normally you draw your schematic in Eeschema, generate a netlist, and import that to Pcbnew. But instead I used skidl to generate the netlist directly. Then all there is to do is add the inputs and outputs and run the autorouter (or do it manually of course).

I decided to do a proof-of-concept “application specific interconnected circuit”, with the goal of making something fun in under 10 chips (a RISC-V CPU currently sits at about 850). I settled on a fading PWM circuit to drive an LED. I manually added a 555-based clock, and ordered a PCB for a few bucks. A few weeks later, this was the result. It worked on the first try! This feeling is even more amazing than with software, and it shows that as long as there are no compiler/library bugs or DRC errors, logic simulations are a good way to prove your PCB design.

To follow along at home you need to install Yosys. A recent release might work, but it’s getting better every day, so building from source is recommended. Then you can just git clone 74xx-liberty and go. There are a number of Verilog programs in benchmarks in case you’d rather make PicoRV32 in 7400 chips.

cd stat
make pwmled.stat # synthesize and run stat
../ic_count.py pwmled.stat # count number of chips used
cd ../sim
make pwmled.vcd # synth to low-level verilog and simulate
gtkwave pwmled.vcd # show test bench results
cd ../kicad
make pwmled.net # generate kicad netlist

But this was all done in Verilog, so where is the VHDL, you might wonder. Well, Yosys does not really support VHDL yet, but Tristan Gingold is hard at work making GHDL synthesize VHDL as a Yosys plugin. I think this is very important work, so I’ve been contributing there as well. After some pull requests I was able to port the breathing LED to VHDL.

Getting VHDL to work in Yosys is a bit of effort. First you need to compile GHDL, which requires installing a recent version of GNAT. Then you need to install ghdlsynth-beta as a plugin, allowing you to run yosys -m ghdl. My fork of 74xx-liberty contains additional make rules for doing the above synthesization for VHDL files, which does something like this before calling the 7400 synthesis script.

cd stat
ghdl -a ../benchmarks/pwmled.vhd # analyse VHDL file
yosys -m ghdl -p "ghdl pwmled; show" # load pwmled entity, show graph

yosys dot graph

A huge thank you to all the people working tirelessly to make open source hardware design a reality. You’re awesome!

July 16, 2019

Jeremy Morgan (JeremyMorgan)

Forget What You've Heard, Now Is the Best Time to Become a Coder July 16, 2019 04:15 PM

Do you want to be a coder? Are you on the fence about trying it? Nervous to get started?

The time is now. Time to pull the trigger. 

There has never been a better time to become a coder.

And I'm going to tell you how to get started.  

My History as a Coder

I started writing code professionally in 2002. Before that, I was building (terrible) websites for myself and friends. I even ran a business for a few years crapping out HTML/Perl sites in the late 90s for good money. I built shoddy software in Visual Basic for businesses.

To learn how to do this, I bought expensive books on HTML, Perl, Visual Basic and Unix. I read them cover to cover and marked them up. Google didn’t exist yet. I consulted Usenet news groups where they insulted you for asking "stupid questions" and figured out most of the stuff the hard way. I learned Unix so I could host my customer's sites and destroyed a lot of stuff before figuring it out. 

I was living in a rural area and had very few friends who were even into this kind of thing, so I had few people to bounce ideas and questions off of. I hadn't yet gone to college and met other technical folks. 

I'm not telling you this so you'll think I'm awesome or some kind of grizzled veteran who is better than you. It was hard, and I wasn't very good for many years, but I still got the job done. I spent countless nights chugging Surge and hacking away trying to figure out things. I got good. I figured things out. But it sucked.

 

Your History

Your story doesn't have to go that way. The world is at your fingertips. You have all the information to get where I am right in front of you. Some grizzled veterans like myself say things like "new developers don't even have to work for it" and pretend like we're so much better because of our struggles. Not true. In reality because you have this information readily available, you'll get better faster. The developer I was at the 3 year mark will be a rookie compared to you.


Today's beginning developers are positioned to be the best generation of developers yet.

That's part of the reason that NOW is the best time to start. You have Google, Stack Overflow, Dev.to, Social Media, Pluralsight, YouTube, you name it. NOW is exponentially better than 1995 as far as starting points go. 

The Market: We need developers! 

Another great reason why now is the time: we are in a talent shortage. Everyone I talk to is looking for developers. Junior, senior, mid-level, whatever. Can you code? You'll get a job. It doesn't even have to be in the language they're using. 

In the mid-2000s, as a developer I had to know everything under the sun just to get in the door for an interview. They had crowds lined up fighting for every job. In 2019 if you learn the basics, throw some projects on GitHub and start sending out resumes you'll get that call. 

According to code.org, there are 504,199 open computing jobs nationwide. There were 63,744 computer science graduates entering the workforce last year. 

There are more jobs available than coders to fill them.

The numbers are in your favor. 

What do you need? 

  • Do you need a Computer Science Degree? No
  • Do you need expensive training? No
  • Do you need a MacBook Pro? No

You don't need any of these things. If you want to get started, you can do it with a ChromeBook. 

For instance: 

Want to learn JavaScript?

  1. Go to JavaScript.Info and start from the beginning. 
  2. Create an account at JSFiddle and start hacking away. 

Don't worry about details like getting your own website, server, etc. Just do it. Want to learn some Python? You can install it on your old Dell laptop and start going. CSS? PHP? C#? These are all available to download for free and will run on nearly any machine built in the last 10 years. 

At some point you will want a faster computer and more in depth training. But to get started this is all you'll need. 

Next Steps

Disclaimer: So, remember how I talked about how hard it was to learn tech? It's what drives me now. This stage of my career is dedicated to teaching people and improving their tech skills. In the interest of transparency, I work for a company called Pluralsight, which is the leading tech skills platform. That's not the only reason I recommend them: I was a customer long before becoming an employee, and it's boosted my career tremendously.

So for next steps after taking some tutorials and getting your feet wet, do the following to break into this industry. 

Phase 1

  • Determine what you want to develop (Web, Mobile, Desktop)
  • Find as many tutorials as you can on the subject.  (You can contact me if you need help with this)
  • Create a GitHub account
  • Start uploading code samples you build with tutorials

Phase 2

  • Start a project - no matter how stupid it seems. Make an app to store your favorite jokes, or todo lists.
  • Create a HackerRank account - Start tackling some puzzles. 
  • Pick problems from Project Euler and write code to solve them. 
  • Get to know Stack Overflow - Search it when you have a problem and answer questions if you know it!

Once you get to this point, you'll start feeling comfortable. Keep working and improving your craft. Dig deeper on the tech you've chosen. Build things, even small things. Then think about getting your first job. 

Phase 3

And many more. If you're targeting a specific role, this is a great way to get your skills up. 

  • Build something useful - Build something that solves a problem for you, a friend or your employer. Nothing teaches like you like building something real.
  • Start teaching - Nothing helps you learn a subject like teaching it. When you get comfortable with your knowledge, share it.

Conclusion

Stop making excuses and don't listen to people who tell you you can't or shouldn't learn to code. If you want it badly enough you can do it and get paid for it. Teaching others is a deep passion of mine, so if you get stuck on something or need some advice, hit me up; I'd be glad to help. 

Now start learning!  




Bogdan Popa (bogdan)

Announcing deta July 16, 2019 07:00 AM

I just released the first version of deta, a Racket library for mapping between database tables and structs. A bit like Elixir’s Ecto.

You can install it from the package server with:

raco pkg install deta

Check it out!

July 15, 2019

Derek Jones (derek-jones)

2019 in the programming language standards’ world July 15, 2019 08:26 PM

Last Tuesday I was at the British Standards Institute for a meeting of IST/5, the committee responsible for programming language standards in the UK.

There has been progress on a few issues discussed last year, and one interesting point came up.

It is starting to look as if there might be another iteration of the Cobol Standard. A handful of people, in various countries, have started to nibble around the edges of various new (in the Cobol sense) features. No, the INCITS Cobol committee (the people who used to do all the heavy lifting) has not been reformed; the work now appears to be driven by people who cannot let go of their involvement in Cobol standards.

ISO/IEC 23360-1:2006, the ISO version of the Linux Base Standard, has been updated and we were asked for a UK position on the document being published. Abstain seemed to be the only sensible option.

Our WG20 representative reported that the ongoing debate over pile of poo emoji has crossed the chasm (he did not exactly phrase it like that). Vendors want to have the freedom to specify code-points for use with their own emoji, e.g., pineapple emoji. The heady days, of a few short years ago, when an encoding for all the world’s character symbols seemed possible, have become a distant memory (the number of unhandled logographs on ancient pots and clay tablets was declining rapidly). Who could have predicted that the dream of a complete encoding of the symbols used by all the world’s languages would be dashed by pile of poo emoji?

The interesting news is from WG9. The document intended to become the Ada20 standard was due to enter the voting process in June, i.e., the committee considered it done. At the end of April the main Ada compiler vendor asked for the schedule to be slipped by a year or two, to enable them to get some implementation experience with the new features; oops. I have been predicting that in the future language ‘standards’ will be decided by the main compiler vendors, and the future is finally starting to arrive. What is the incentive for the GNAT compiler people to pay any attention to proposals written by a bunch of non-customers (ok, some of them might work for customers)? One answer is that Ada users tend to be large bureaucratic organizations (e.g., the DOD), who like to follow standards, and might fund GNAT to implement the new document (perhaps this delay by GNAT is all about funding, or lack thereof).

Right on cue, C++ users have started to notice that C++20 added support for a system header named version, which conflicts with the existing practice of using a file called version to contain versioning information; a problem if the header search path used by the compiler includes a project’s top-level directory (which is where the versioning file version often sits). So the WG21 committee decides on what it thinks is a good idea, implementors implement it, and users complain; implementors now have a good reason to not follow a requirement in the standard, to keep users happy. Will WG21 be apologetic, or get all high and mighty? We will have to wait and see.

July 14, 2019

Ponylang (SeanTAllen)

Last Week in Pony - July 14, 2019 July 14, 2019 11:06 AM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

July 13, 2019

eta (eta)

Building the Physics Penitentiary July 13, 2019 11:00 PM

(Edited 2019-08-06: added links to download the source code, and made a few minor language changes.)

I “recently”1 took part in the Weizmann Institute’s international “safe-cracking” tournament, as part of a team of five people2 who placed highly enough in the UK version of the tournament to get to fly over to Israel and have a shot at it over there. Since I did most/all of the electronics and control system for our entry, I thought I’d write a blog post about it!

The idea behind the competition is to build ‘safes’ - essentially boxes with physics-related puzzles and challenges that must be completed in order to ‘crack’ the safe, the point being that only an understanding of the physics involved would let you unlock it. Our safe, the Physics Penitentiary3, looked a bit like this:

Picture of the safe

Safe background

So, I’m not a physicist by any means (which led to some fun discussions about what a random CS student was doing taking part in a physics competition, but we’ll gloss over that…), which means I have only a loose understanding of the physics. That aside, what was in the safe roughly looked like:

  • one Crookes radiometer / light mill (the weird glass thing in the very centre, surrounded by a cage)
  • a pump, hooked up to a container full of ethanol, arranged to deposit said ethanol on the radiometer
  • a soundproof(-ish) section to the right, with two speakers facing one another, plus a microphone
  • a control panel with lots of fun buttons for the user to press
  • a tube with two servo-controlled flags blocking it, which was used to dispense marbles (‘prisoners’) upon successful completion of the challenges

In addition to this, you were allowed:

  • two torches, one with a tungsten bulb, one LED
  • a canister of compressed air

With this, you had to ‘crack’ the safe by figuring out how the Crookes radiometer worked - essentially, due to some physicsey explanation I don’t quite comprehend4, the radiometer would spin if it was heated (e.g. by shining an inefficient tungsten bulb on it), and would also spin, but in the opposite direction, if you cooled it again (e.g. by inverting the canister of compressed air to get at the incredibly cold gas and spraying that on the surface of the radiometer). We also added the sound-based challenge seen to the right, where you had to get the two sound waves to match frequency and wave type (e.g. sine, square, or triangle) in order to cancel each other out5 - to make it a bit harder, and also to fit with the ‘prison’ theme.

Radiometer spinning

Above: nifty GIF of the radiometer doing its thing. Image courtesy of Nevit Dilmen on Wikimedia.

And that was all well and good; the thing spun one way if you used the light, and spun the other way if you used the inverted gas canister.

However, one important detail is still missing: how did the safe actually detect which way the radiometer was spinning, if at all? In planning, we sort of hand-waved this issue away with vague talk of ‘lasers’, ‘infrared’ and ‘proximity sensor hackery’, but it turned out that detecting the spin direction was one of the hardest challenges we had to solve - and the solution was nearly all in code as well, which makes it interesting to talk about!

You spin me right round, baby, right round

So, we had a problem. The problem was made harder by a number of restrictions:

  • The radiometer was a completely sealed partial vacuum chamber, so we couldn’t get in there to add sensors or machinery of any sort. Anything we’d want to use to detect the spin would have to be outside.
  • It’s curved glass, so using a laser, IR beam or something like that probably wouldn’t work too well (though we didn’t test it) due to the light being bent (i.e. refraction).
  • The method used to cool the radiometer (shooting freezing cold liquid at the glass through the grating at the top) caused it to frost over with an opaque white layer of ice. If any sensors were in the way of this blast, they probably wouldn’t have fared too well.
    • To remedy this issue, the ethanol dispenser would let you spray ethanol on the glass to defrost it, making it clear again - although, of course, this also meant the sensors had random extra liquids to contend with…

In the first revision of the safe (the one that we presented at the UK finals), this was accomplished using a generic chinesium proximity sensor we managed to procure from eBay pointed at the radiometer, and some very very questionable heuristics - to give you an idea, we only managed to get the heuristics to work semi-reliably on the very day of the competition itself, and even then, the output of the sensor detection algorithm was fed into an array of 5 recent readings, and we’d take the majority value out of these five to smooth out the inconsistencies (!)6. Very little science was involved in getting this to work; we7 just threw code at the problem until something semi-worked.

Needless to say, such a solution was not going to fly (quite literally8) at the international competition, so, after our miraculous success at the national finals, we needed to think of something else.

The solution we eventually settled on involved using a Vishay VCNL4010 proximity sensor (and SV-Zanshin’s wonderful Arduino library). This sensor had a number of nice features that made us choose it:

  • The actual sensor block (the small black square in the linked image) was very tiny - in fact, just smaller than one of the blades of the radiometer. This meant we could position it such that the block lined up exactly with the blades, so we’d get actually decent readings.
  • It had a built-in ambient light sensor, and combined this data with the IR proximity data to avoid ambient light messing up the sensor readings, which was an issue with the old sensor. As a bonus, this feature actually worked - shining a light on the radiometer didn’t seem to affect the sensor at all!

Sensor closeup

Above: one of the pictures we took of the sensor alignment during testing. Note the neat use of braided sheathing to make cable management not painful…

It’s graph time!

With the actual hardware sorted out, as best we could (positioning the sensor was painstakingly hard, and required multiple attempts to get it to work properly), the next remaining challenge was actually making sense of the proximity data.

Of course, a proximity sensor only gives you a reading indicating how near or far something is, in arbitrary units; you still need to do some processing to figure out spin direction (spin frequency is relatively easy to determine, but we didn’t really care about that much here). In theory, this should be relatively easy: if the radiometer is spinning in one direction, the proximity reading should be increasing (modulo the spike down every now and then), and vice versa for the opposite direction. Indeed, if you plot the readings on a graph, you get something looking like:

Plot of radiometer proximity data

Above: a plot of the radiometer proximity data, taken from the sensor. The first half of the graph shows it spinning in one direction, then it changes at around the 4000 mark on the x-axis to spin in the other direction.

Looking at this (probably somewhat tiny) plot, a pattern is somewhat distinguishable: the troughs of each of the spikes have a different gradient, depending on which direction the radiometer is spinning in - somewhat as we’d expect, from the description we gave above. This reduces the problem to:

  • How do we filter out just the bit at the bottom of each spike that we’re interested in from all the rest of the stuff?
  • How do we figure out the gradient of said interesting points?

The first bullet point was solved with some hacky application of elementary statistics - we kept a buffer of the last 256 data points received from the sensor, calculated their mean and standard deviation, and excluded points that were above mean - (stddev / 4) (with the divisor determined experimentally; 4 seemed to work well enough). This isolated the points to just the few points at the bottom of the trough (or something approximating that).

Radiometer proximity data with gradient lines drawn

For the second, it was time for…some more elementary stats! As might be obvious, a simple linear regression sufficed, giving us a regression line and its gradient to work with. To avoid drawing conclusions from too little data, we refused to run regressions on data with an insufficiently wide range, and rejected results with a low R-value. This finally gave us something vaguely capable of determining the spin direction - and we’d also require 3 or so concordant readings in order to actually come to a judgement, to minimize the chance of random errors. (Remember, a false positive would result in the challenge instantly unlocking, so we wanted to be sure; better to have the cracker wait for 5 seconds or so than to have it randomly unlock out of nowhere!)
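
Putting the filtering and the regression together, the detection logic looked roughly like this (a simplified C++ sketch rather than the actual firmware; the function names, buffer handling and thresholds here are illustrative):

#include <cmath>

const int N = 256;          // rolling buffer of recent proximity readings
float buf[N];
int head = 0, count = 0;

void addReading(float r) {
    buf[head] = r;
    head = (head + 1) % N;
    if (count < N) count++;
}

// Returns +1 or -1 for the two spin directions, or 0 if undecided.
int spinDirection() {
    if (count < N) return 0;

    // mean and standard deviation over the whole buffer
    float mean = 0, var = 0;
    for (int i = 0; i < N; i++) mean += buf[i];
    mean /= N;
    for (int i = 0; i < N; i++) var += (buf[i] - mean) * (buf[i] - mean);
    float stddev = std::sqrt(var / N);

    // keep only the trough points (below mean - stddev/4), oldest first,
    // accumulating the sums needed for a simple linear regression
    float sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = 0;
    for (int i = 0; i < N; i++) {
        float y = buf[(head + i) % N];   // read the ring buffer in time order
        if (y < mean - stddev / 4) {
            float x = (float)i;
            sx += x; sy += y; sxx += x * x; sxy += x * y;
            n++;
        }
    }
    if (n < 8) return 0;                          // too few points to trust

    float denom = n * sxx - sx * sx;
    if (denom == 0) return 0;
    float slope = (n * sxy - sx * sy) / denom;    // gradient of the fitted line
    // the real code also rejected fits with a low R value and waited for
    // several concordant readings before declaring a direction
    return slope > 0 ? 1 : -1;
}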

Oh, and the issues with the user being able to spray frost (or ethanol) at the sensor were resolved by…judicious use of tape covering the sensor, so nothing could get in at it (!). We also added some tissue paper, so that the ethanol wouldn’t run everywhere and ruin the rest of the safe.

Radiometer closeup

Above: If you look closely, you can just about make out the tape on the other side of the radiometer…

What about the rest of the electronics?

This whole thing was done using STM32F103 microcontrollers, actually - the first revision of the safe used a Blue Pill (see linked blog post for more), while the second revision used the somewhat more reliable Maple Mini. The STM32s were definitely a big plus; they were pretty small, for one, which meant we could stuff the electronics underneath the radiometer (an Arduino Uno would definitely not have fit in!), and their speed (and 32-bit address bus…) made doing calculations like the above regression stuff actually feasible. I used the STM32duino IDE and tooling, primarily because writing drivers (for things like the VCNL4010) didn’t seem like my idea of a fun time (!).

Electronics stuffed into the radiometer well

Above: most of the electronics and cabling, stuffed into a well below the radiometer during construction. Connecting all the wires was pretty difficult!

To put things into context, the original revision didn’t use a PCB at all, and instead had tons of flying wires connecting everything on a piece of protoboard; we decided that making a proper PCB for the second revision might be a good idea, so we did:

Artsy shot of the PCB, with theta.eu.org logo in focus

Above: artsy shot of the PCB made for the project, with gratuitous theta.eu.org logo in focus.

In total, the electronics ended up being made up of:

  • 1 Maple Mini STM32F103 microcontroller
  • 1 VCNL4010 proximity sensor breakout
  • 2 AD9833 waveform generator modules, for the frequency challenge
    • The first iteration of the safe initially used digital potentiometers (!) to control some hacked up tone generator kits we had lying around; using the AD9833s was definitely a better idea.
    • However, finding breakout boards with this IC on them wasn’t an easy task9.
  • 1 SparkFun microphone breakout
  • 2 amplifiers, to drive the speakers
  • 2 servo motors, to control release of the marbles
  • 1 Tracopower TSR-2450 voltage regulator, for power input
    • I love these modules (for their efficiency); we used one to step down the 9V barrel jack input to a more manageable 5V.
  • Some well-placed debug (UART) and programming (SWDIO) headers, to get at the Maple Mini
  • Assorted push switches and LEDs for the control panel
  • Probably a few passives and other things like that

Source code

As with the Systems CAT, this code is intended to be looked at, rather than used (if you consider yourself patient enough to spend time untangling it, that is!). However, you might find it interesting:

(All rights reserved, no warranty provided, etc.)

Conclusion

Right, that’s probably about enough information for one blog post; it’s been upwards of 2000 words, after all! Hopefully some of it was vaguely informative / interesting; it’s also worth keeping in mind that the actual process of building this thing was significantly more chaotic and panicked than it might seem (c.f. the footnotes below). However, it was definitely a fun thing to build (despite not doing well in the international finals, sadly!)

Do let me know using the comment thing below if you have any more questions about the safe, or feedback; it’s always nice to receive comments!


  1. (read: a couple of months ago) 

  2. This means your odds of figuring out who the hell I actually am in real life are now about 1 in 5. 

  3. Not my choice of name. 

  4. The explanations behind the radiometer are apparently still disputed; if you do a search for it, there are a number of conflicting reasons why it behaves like this! 

  5. If you’re wondering how that actually worked in practice given differences in phase, etc., you’d be onto something. We may or may not have bodged this part a bit to get it to work… 

  6. It also stopped giving any readings at all if you shone a light at the sensor, which caused some fun issues when it came to cracking the safe. 

  7. When I say ‘we’, I really mean ‘I’, but it sounds nicer this way. 

  8. Transporting the safe to Israel was done, of course, on a plane; the sensor positioning was so sensitive that we thought it’d definitely get screwed up by being bumped around… 

  9. When one of them broke in the week before the international final, trying to hunt around Amazon for another one wasn’t as easy as we would have hoped! 

Jeff Carpenter (jeffcarp)

How are Words Represented in Machine Learning? Part 1 July 13, 2019 01:31 AM

Machine learning on human languages is a super exciting space right now. Applications are exploding—just think of how many natural language ML models it takes to run a smart assistant, from transforming spoken audio to text, to finding the exact part of a web page that answers your question, to choosing the correct words with the correct grammar to reply to you. At work I recently had the opportunity to build an NLP system.

July 12, 2019

Gustaf Erikson (gerikson)

June July 12, 2019 06:57 PM

Kungsträdgården

Stensund

Jun 2018 | Jun 2017 | Jun 2016 | Jun 2015 | Jun 2014 | Jun 2013 | Jun 2012 | Jun 2011 | Jun 2010 | Jun 2009

Leo Tindall (LeoLambda)

What Is Rust's unsafe? July 12, 2019 05:00 PM

I’ve seen a lot of misconceptions around what the unsafe keyword means for the utility and validity of Rust and its marketing as a “safe systems language”. The truth is a lot more complicated than a single pithy tweet can possibly sum up, unfortunately; here it is as I see it. Basically, the unsafe keyword does not turn off the advanced type system that keeps Rust code honest. It only allows a few select “superpowers”, like dereferencing raw pointers.

Jan van den Berg (j11g)

About WordPress, emojis, MySQL and latin1, utf8 and utf8mb4 character sets July 12, 2019 11:55 AM

PSA: the MySQL utf8 character set is not real Unicode utf8. Instead use utf8mb4.

So you landed here because some parts of your website are garbled. And this happened after a server or website migration. You exported your database and imported this export or dump on the new server. And now your posts look like this:

Strange characters!

When they should look like this:

Emojis!

These are screenshots from this website. This website was an old WordPress installation that still used the latin1 character set. The WordPress installation was up to date, but automatic WordPress updates will never update or change the database character set. So this will always remain what it was on initial installation (which was latin1 in my case).

And latin1 can not store (real) Unicode characters e.g. emojis. You need a Unicode character set. So, just use utf8, right?

Problem 1: exporting and importing the database with the wrong character set

When exporting/dumping a database with mysqldump, it will use the MySQL server’s default character set (set in my.cnf). In my case this was set to utf8. But by not explicitly telling mysqldump to use the correct character set for this particular database (which was latin1), my dumped data was messed up.

So when I restored the dump (on the new server) some text was garbled and emojis had completely disappeared from blog posts.

I fixed this with the help of these links. The key here is: explicitly set the correct character set for a database when exporting it. Then change all instances of the old character set in the dump file to the new character set and import this file.
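
In shell terms the export/import dance looks roughly like this (a sketch, assuming a database called wordpress and GNU sed; inspect the dump before importing rather than trusting a blind search-and-replace):

mysqldump --default-character-set=latin1 --skip-set-charset wordpress > dump.sql
sed -i 's/latin1/utf8mb4/g' dump.sql
mysql --default-character-set=utf8mb4 wordpress < dump.sql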

Problem 2: some emojis work, some don’t

After this my text was fine. I exported using the old character set and imported using utf8, what could go wrong! But some emojis were still missing, but others were not?! This was a head-scratcher.

There is a question mark instead of an emoji

How can this be? I had set my database character set to utf8 (with utf8_general_ci collation). This is Unicode, right? Wrong!

MySQL utf8 does not support complete Unicode. MySQL utf8 uses only 3 bytes per character.

Full Unicode support needs 4 bytes per character. So your MySQL installation needs to use the utf8mb4 character set (and utf8mb4_unicode_ci collation) to have real and full Unicode support.
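
If you prefer to convert a database in place rather than dump and re-import, the statements look something like this (a sketch; wp_posts stands in for each table you care about, and you should take a backup first):

ALTER DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
ALTER TABLE wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;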

Some strange decisions were made back in 2002. 🤯 Which has given a lot of people headaches.

So, MySQL utf8 only uses three bytes per character, but the explanation for the image with the one missing emoji is that *some* emojis (like the ❤) can be expressed correctly in only three bytes.

Problem 3: but I used emojis before, in my latin1 WordPress installation?!

Yes, things worked fine, right? And here is the thing: if you dumped your database with the correct (old) character set and imported correctly with this old character set, things would still work!

But you said latin1 does not support (4 byte character) emojis!?

Correct!

However WordPress is smart enough to figure this out: when using emojis in post titles or posts it will check the database character set and change the emoji to (hexadecimal) HTML code — which can be expressed and stored just fine in latin1.

But how do you explain this image? This was after incorrectly dumping and importing a database.

Wait, what?! The URL has the correct emoji but the post title does not!?!

The post URL has two emojis, but one emoji is missing from the title?! I already explained why the emoji is missing from the post title, so how can that emoji be present in the post URL? This is because WordPress stores the post titles differently from the URLs.

How the title and the post name (url) are stored

So the post title field stores real Unicode symbols and the post_name (URL) field stores them encoded. So there you have it 🤓!

The post About WordPress, emojis, MySQL and latin1, utf8 and utf8mb4 character sets appeared first on Jan van den Berg.

Dan Luu (dl)

Files are fraught with peril July 12, 2019 12:00 AM

This is a pseudo-transcript for a talk given at Deconstruct 2019. To make this accessible for people on slow connections as well as people using screen readers, the slides have been replaced by in-line text (the talk has ~120 slides; at an average of 20 kB per slide, that's 2.4 MB). If you think that's trivial, consider that half of Americans still aren't on broadband and the situation is much worse in developing countries.

Let's talk about files! Most developers seem to think that files are easy. Just for example, let's take a look at the top reddit r/programming comments from when Dropbox announced that they were only going to support ext4 on Linux (the most widely used Linux filesystem). For people not familiar with reddit r/programming, I suspect r/programming is the most widely read English-language programming forum in the world.

The top comment reads:

I'm a bit confused, why do these applications have to support these file systems directly? Doesn't the kernel itself abstract away from having to know the lower level details of how the files themselves are stored?

The only differences I could possibly see between different file systems are file size limitations and permissions, but aren't most modern file systems about on par with each other?

The #2 comment (and the top replies going two levels down) are:

#2: Why does an application care what the filesystem is?

#2: Shouldn't that be abstracted as far as "normal apps" are concerned by the OS?

Reply: It's a leaky abstraction. I'm willing to bet each different FS has its own bugs and its own FS specific fixes in the dropbox codebase. More FS's means more testing to make sure everything works right . . .

2nd level reply: What are you talking about? This is a dropbox, what the hell does it need from the FS? There are dozenz of fssync tools, data transfer tools, distributed storage software, and everything works fine with inotify. What the hell does not work for dropbox exactly?

another 2nd level reply: Sure, but any bugs resulting from should be fixed in the respective abstraction layer, not by re-implementing the whole stack yourself. You shouldn't re-implement unless you don't get the data you need from the abstraction. . . . DropBox implementing FS-specific workarounds and quirks is way overkill. That's like vim providing keyboard-specific workarounds to avoid faulty keypresses. All abstractions are leaky - but if no one those abstractions, nothing will ever get done (and we'd have billions of "operating systems").

In this talk, we're going to look at how file systems differ from each other and other issues we might encounter when writing to files. We're going to look at the file "stack" starting at the top with the file API, which we'll see is nearly impossible to use correctly and that supporting multiple filesystems without corrupting data is much harder than supporting a single filesystem; move down to the filesystem, which we'll see has serious bugs that cause data loss and data corruption; and then we'll look at disks and see that disks can easily corrupt data at a rate five million times greater than claimed in vendor datasheets.

File API

Writing one file

Let's say we want to write a file safely, so that we don't want to get data corruption. For the purposes of this talk, this means we'd like our write to be "atomic" -- our write should either fully complete, or we should be able to undo the write and end up back where we started. Let's look at an example from Pillai et al., OSDI’14.

We have a file that contains the text a foo and we want to overwrite foo with bar so we end up with a bar. We're going to make a number of simplifications. For example, you should probably think of each character we're writing as a sector on disk (or, if you prefer, you can imagine we're using a hypothetical advanced NVM drive). Don't worry if you don't know what that means, I'm just pointing this out to note that this talk is going to contain many simplifications, which I'm not going to call out because we only have twenty-five minutes and the unsimplified version of this talk would probably take about three hours.

To write, we might use the pwrite syscall. This is a function provided by the operating system to let us interact with the filesystem. Our invocation of this syscall looks like:

pwrite(
  [file], 
  “bar”, // data to write
  3,     // write 3 bytes
  2)     // at offset 2

pwrite takes the file we're going to write, the data we want to write, bar, the number of bytes we want to write, 3, and the offset where we're going to start writing, 2. If you're used to using a high-level language, like Python, you might be used to an interface that looks different, but underneath the hood, when you write to a file, it's eventually going to result in a syscall like this one, which is what will actually write the data into a file.

If we just call pwrite like this, we might succeed and get a bar in the output, or we might end up doing nothing and getting a foo, or we might end up with something in between, like a boo, a bor, etc.

What's happening here is that we might crash or lose power when we write. Since pwrite isn't guaranteed to be atomic, if we crash, we can end up with some fraction of the write completing, causing data corruption. One way to avoid this problem is to store an "undo log" that will let us restore corrupted data. Before we modify the file, we'll make a copy of the data that's going to be modified (into the undo log), then we'll modify the file as normal, and if nothing goes wrong, we'll delete the undo log.

If we crash while we're writing the undo log, that's fine -- we'll see that the undo log isn't complete and we know that we won't have to restore because we won't have started modifying the file yet. If we crash while we're modifying the file, that's also ok. When we try to restore from the crash, we'll see that the undo log is complete and we can use it to recover from data corruption:

creat(/d/log) // Create undo log
write(/d/log, "2,3,foo", 7) // To undo, at offset 2, write 3 bytes, "foo"
pwrite(/d/orig, “bar”, 3, 2) // Modify original file as before
unlink(/d/log) // Delete log file

If we're using ext3 or ext4, widely used Linux filesystems, and we're using the mode data=journal (we'll talk about what these modes mean later), here are some possible outcomes we could get:

d/log: "2,3,f"
d/orig: "a foo"

d/log: ""
d/orig: "a foo"

It's possible we'll crash while the log file write is in progress and we'll have an incomplete log file. In the first case above, we know that the log file isn't complete because the file says we should start at offset 2 and write 3 bytes, but only one byte, f, is specified, so the log file must be incomplete. In the second case above, we can tell the log file is incomplete because the undo log format should start with an offset and a length, but we have neither. Either way, since we know that the log file isn't complete, we know that we don't need to restore.

Another possible outcome is something like:

d/log: "2,3,foo"
d/orig: "a boo"

d/log: "2,3,foo"
d/orig: "a bar"

In the first case, the log file is complete and we crashed while writing the file. This is fine, since the log file tells us how to restore to a known good state. In the second case, the write completed, but since the log file hasn't been deleted yet, we'll restore from the log file.

If we're using ext3 or ext4 with data=ordered, we might see something like:

d/log: "2,3,fo"
d/orig: "a boo"

d/log: ""
d/orig: "a bor"

With data=ordered, there's no guarantee that the write to the log file and the pwrite that modifies the original file will execute in program order. Instead, we could get

creat(/d/log) // Create undo log
pwrite(/d/orig, “bar”, 3, 2) // Modify file before writing undo log!
write(/d/log, "2,3,foo", 7) // Write undo log
unlink(/d/log) // Delete log file

To prevent this re-ordering, we can use another syscall, fsync. fsync is a barrier (prevents re-ordering) and it flushes caches (which we'll talk about later).

creat(/d/log)
write(/d/log, “2,3,foo”, 7)
fsync(/d/log) // Add fsync to prevent re-ordering
pwrite(/d/orig, “bar”, 3, 2)
fsync(/d/orig) // Add fsync to prevent re-ordering
unlink(/d/log)

This works with ext3 or ext4, data=ordered, but if we use data=writeback, we might see something like:

d/log: "2,3,WAT"
d/orig: "a boo"

Unfortunately, with data=writeback, the write to the log file isn't guaranteed to be atomic and the filesystem metadata that tracks the file length can get updated before we've finished writing the log file, which will make it look like the log file contains whatever bits happened to be on disk where the log file was created. Since the log file exists, when we try to restore after a crash, we may end up "restoring" random garbage into the original file. To prevent this, we can add a checksum (a way of making sure the file is actually valid) to the log file.

creat(/d/log)
write(/d/log,“…[✓∑],foo”,7) // Add checksum to log file to detect incomplete log file
fsync(/d/log)
pwrite(/d/orig, “bar”, 3, 2)
fsync(/d/orig)
unlink(/d/log)

This should work with data=writeback, but we could still see the following:

d/orig: "a boo"

There's no log file, even though we created a file, wrote to it, and then fsync'd it! Unfortunately, there's no guarantee that the directory will actually store the location of the file if we crash. In order to make sure we can easily find the file when we restore from a crash, we need to fsync the parent directory of the newly created log.

creat(/d/log)
write(/d/log,“…[✓∑],foo”,7)
fsync(/d/log)
fsync(/d) // fsync parent directory
pwrite(/d/orig, “bar”, 3, 2)
fsync(/d/orig)
unlink(/d/log)

There are a couple more things we should do. We should also fsync after we're done (not shown), and we also need to check for errors. These syscalls can return errors and those errors need to be handled appropriately. There's at least one filesystem issue that makes this very difficult, but since that's not an API usage thing per se, we'll look at this again in the Filesystems section.
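
To make the full recipe concrete, here's a minimal C sketch of the sequence with error checking added. It keeps the toy paths and data from the example above and uses a placeholder instead of a real checksum; real code would also need a recovery path and a policy for what to do when fsync fails.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void die(const char *msg) { perror(msg); exit(1); }

/* fsync a path; used here for the parent directory */
static void fsync_path(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) die("open");
    if (fsync(fd) < 0) die("fsync");
    close(fd);
}

int main(void) {
    /* 1. Write the undo log: checksum placeholder, offset, length, old data. */
    int log_fd = open("/d/log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (log_fd < 0) die("creat log");
    if (write(log_fd, "CKSUM,2,3,foo", 13) != 13) die("write log");
    if (fsync(log_fd) < 0) die("fsync log");   /* barrier + cache flush */
    close(log_fd);

    /* 2. Make sure the log's directory entry is durable. */
    fsync_path("/d");

    /* 3. Modify the original file and flush it. */
    int orig_fd = open("/d/orig", O_WRONLY);
    if (orig_fd < 0) die("open orig");
    if (pwrite(orig_fd, "bar", 3, 2) != 3) die("pwrite orig");
    if (fsync(orig_fd) < 0) die("fsync orig");
    close(orig_fd);

    /* 4. Delete the undo log and fsync the directory once more. */
    if (unlink("/d/log") < 0) die("unlink log");
    fsync_path("/d");
    return 0;
}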

We've now seen what we have to do to write a file safely. It might be more complicated than we like, but it seems doable -- if someone asks you to write a file in a self-contained way, like an interview question, and you know the appropriate rules, you can probably do it correctly. But what happens if we have to do this as a day-to-day part of our job, where we'd like to write to files safely every time we write to files in a large codebase?

API in practice

Pillai et al., OSDI’14 looked at a bunch of software that writes to files, including things we'd hope write to files safely, like databases and version control systems: Leveldb, LMDB, GDBM, HSQLDB, Sqlite, PostgreSQL, Git, Mercurial, HDFS, Zookeeper. They then wrote a static analysis tool that can find incorrect usage of the file API, things like incorrectly assuming that operations that aren't atomic are actually atomic, incorrectly assuming that operations that can be re-ordered will execute in program order, etc.

When they did this, they found that every single piece of software they tested except for SQLite in one particular mode had at least one bug. This isn't a knock on the developers of this software or the software -- the programmers who work on things like Leveldb, LMDB, etc., know more about filesystems than the vast majority of programmers and the software has more rigorous tests than most software. But they still can't use files safely every time! A natural follow-up to this is the question: why is the file API so hard to use that even experts make mistakes?

Concurrent programming is hard

There are a number of reasons for this. If you ask people "what are hard problems in programming?", you'll get answers like distributed systems, concurrent programming, security, aligning things with CSS, dates, etc.

And if we look at what mistakes cause bugs when people do concurrent programming, we see bugs come from things like "incorrectly assuming operations are atomic" and "incorrectly assuming operations will execute in program order". These things that make concurrent programming hard also make writing files safely hard -- we saw examples of both of these kinds of bugs in our first example. More generally, many of the same things that make concurrent programming hard are the same things that make writing to files safely hard, so of course we should expect that writing to files is hard!

Another property writing to files safely shares with concurrent programming is that it's easy to write code that has infrequent, non-deterministic failures. With respect to files, people will sometimes say this makes things easier ("I've never noticed data corruption", "your data is still mostly there most of the time", etc.), but if you want to write files safely because you're working on software that shouldn't corrupt data, this makes things more difficult by making it harder to tell if your code is really correct.

API inconsistent

As we saw in our first example, even when using one filesystem, different modes may have significantly different behavior. Large parts of the file API look like this, where behavior varies across filesystems or across different modes of the same filesystem. For example, if we look at mainstream filesystems, appends are atomic, except when using ext3 or ext4 with data=writeback, or ext2 in any mode; and directory operations can't be re-ordered w.r.t. any other operations, except on btrfs. In theory, we should all read the POSIX spec carefully and make sure all our code is valid according to POSIX, but, if they check filesystem behavior at all, people tend to code to what their filesystem does and not some abstract spec.

If we look at one particular mode of one filesystem (ext4 with data=journal), that seems relatively possible to handle safely, but when writing for a variety of filesystems, especially when handling filesystems that are very different from ext3 and ext4, like btrfs, it becomes very difficult for people to write correct code.

Docs unclear

In our first example, we saw that we can get different behavior from using different data= modes. If we look at the manpage (manual) on what these modes mean in ext3 or ext4, we get:

journal: All data is committed into the journal prior to being written into the main filesystem.

ordered: This is the default mode. All data is forced directly out to the main file system prior to its metadata being committed to the journal.

writeback: Data ordering is not preserved – data may be written into the main filesystem after its metadata has been committed to the journal. This is rumoured to be the highest-throughput option. It guarantees internal filesystem integrity, however it can allow old data to appear in files after a crash and journal recovery.

If you want to know how to use your filesystem safely, and you don't already know what a journaling filesystem is, this definitely isn't going to help you. If you know what a journaling filesystem is, this will give you some hints but it's still not sufficient. It's theoretically possible to figure everything out from reading the source code, but this is pretty impractical for most people who don't already know how the filesystem works.

For English-language documentation, there's lwn.net and the Linux kernel mailing list (LKML). LWN is great, but they can't keep up with everything, so LKML is the place to go if you want something comprehensive. Here's an example of an exchange on LKML about filesystems:

Dev 1: Personally, I care about metadata consistency, and ext3 documentation suggests that journal protects its integrity. Except that it does not on broken storage devices, and you still need to run fsck there.
Dev 2: as the ext3 authors have stated many times over the years, you still need to run fsck periodically anyway.
Dev 1: Where is that documented?
Dev 2: linux-kernel mailing list archives.
FS dev: Probably from some 6-8 years ago, in e-mail postings that I made.

While the filesystem developers tend to be helpful and they write up informative responses, most people probably don't keep up with the past 6-8 years of LKML.

Performance / correctness conflict

Another issue is that the file API has an inherent conflict between performance and correctness. We noted before that fsync is a barrier (which we can use to enforce ordering) and that it flushes caches. If you've ever worked on the design of a high-performance cache, like a microprocessor cache, you'll probably find the bundling of these two things into a single primitive to be unusual. A reason this is unusual is that flushing caches has a significant performance cost and there are many cases where we want to enforce ordering without paying this performance cost. Bundling these two things into a single primitive forces us to pay the cache flush cost when we only care about ordering.

Chidambaram et al., SOSP’13 looked at the performance cost of this by modifying ext4 to add a barrier mechanism that doesn't flush caches and they found that, if they modified software appropriately and used their barrier operation where a full fsync wasn't necessary, they were able to achieve performance roughly equivalent to ext4 with cache flushing entirely disabled (which is unsafe and can lead to data corruption) without sacrificing safety. However, making your own filesystem and getting it adopted is impractical for most people writing user-level software. Some databases will bypass the filesystem entirely or almost entirely, but this is also impractical for most software.

That's the file API. Now that we've seen that it's extraordinarily difficult to use, let's look at filesystems.

Filesystem

If we want to make sure that filesystems work, one of the most basic tests we could do is to inject errors at the layer below the filesystem to see if the filesystem handles them properly. For example, on a write, we could have the disk fail to write the data and return the appropriate error. If the filesystem drops this error or doesn't handle this properly, that means we have data loss or data corruption. This is analogous to the kinds of distributed systems faults Kyle Kingsbury talked about in his distributed systems testing talk yesterday (although these kinds of errors are much more straightforward to test).

Prabhakaran et al., SOSP’05 did this and found that, for most filesystems tested, almost all write errors were dropped. The major exception to this was on ReiserFS, which did a pretty good job with all types of errors tested, but ReiserFS isn't really used today for reasons beyond the scope of this talk.

We (Wesley Aptekar-Cassels and I) looked at this again in 2017 and found that things had improved significantly. Most filesystems (other than JFS) could pass these very basic tests on error handling.

Another way to look for errors is to look at filesystem code to see if it handles internal errors correctly. Gunawi et al., FAST’08 did this and found that internal errors were dropped a significant percentage of the time. The technique they used made it difficult to tell if functions that could return many different errors were correctly handling each error, so they also looked at calls to functions that can only return a single error. In those cases, errors were dropped roughly 2/3 to 3/4 of the time, depending on the function.

Wesley and I also looked at this again in 2017 and found significant improvement -- errors for the same functions Gunawi et al. looked at were "only" ignored 1/3 to 2/3 of the time, depending on the function.

Gunawi et al. also looked at comments near these dropped errors and found comments like "Just ignore errors at this point. There is nothing we can do except to try to keep going." (XFS) and "Error, skip block and hope for the best." (ext3).

Now we've seen that while filesystems used to drop even the most basic errors, they now handle them correctly, but there are still some code paths where errors can get dropped. For a concrete example of a case where this happens, let's look back at our first example. If we get an error on fsync, unless we have a pretty recent Linux kernel (Q2 2018-ish), there's a pretty good chance that the error will be dropped and it may even get reported to the wrong process!

On recent Linux kernels, there's a good chance the error will be reported (to the correct process, even). Wilcox, PGCon’18 notes that an error on fsync is basically unrecoverable. The details depend on the filesystem -- on XFS and btrfs, modified data that's in the filesystem will get thrown away and there's no way to recover. On ext4, the data isn't thrown away, but it's marked as unmodified, so the filesystem won't try to write it back to disk later, and if there's memory pressure, the data can be thrown out at any time. If you're feeling adventurous, you can try to recover the data before it gets thrown out with various tricks (e.g., by forcing the filesystem to mark it as modified again, or by writing it out to another device, which will force the filesystem to write the data out even though it's marked as unmodified), but there's no guarantee you'll be able to recover the data before it's thrown out. On Linux ZFS, it appears that there's a code path designed to do the right thing, but CPU usage spikes and the system may hang or become unusable.

In general, there isn't a good way to recover from this on Linux. Postgres, MySQL, and MongoDB (widely used databases) will crash themselves and the user is expected to restore from the last checkpoint. Most software will probably just silently lose or corrupt data. And fsync is a relatively good case -- for example, syncfs simply doesn't return errors on Linux at all, leading to silent data loss and data corruption.

BTW, when Craig Ringer first proposed that Postgres should crash on fsync error, the first response on the Postgres dev mailing list was:

Surely you jest . . . If [current behavior of fsync] is actually the case, we need to push back on this kernel brain damage

But after talking through the details, everyone agreed that crashing was the only good option. One of the many unfortunate things is that most disk errors are transient. Since the filesystem discards critical information that's necessary to proceed without data corruption on any error, transient errors that could be retried instead force software to take drastic measures.

And while we've talked about Linux, this isn't unique to Linux. Fsync error handling (and error handling in general) is broken on many different operating systems. At the time Postgres "discovered" the behavior of fsync on Linux, FreeBSD had arguably correct behavior, but OpenBSD and NetBSD behaved the same as Linux (true error status dropped, retrying causes success response, data lost). This has been fixed on OpenBSD and probably some other BSDs, but Linux still basically has the same behavior and you don't have good guarantees that this will work on any random UNIX-like OS.

Now that we've seen that, for many years, filesystems failed to handle errors in some of the most straightforward and simple cases and that there are cases that still aren't handled correctly today, let's look at disks.

Disk

Flushing

We've seen that it's easy to not realize we have to call fsync when we have to call fsync, and that even if we call fsync appropriately, bugs may prevent fsync from actually working. Rajimwale et al., DSN’11 looked into whether or not disks actually flush when you ask them to flush, assuming everything above the disk works correctly (their paper is actually mostly about something else; they just discuss this briefly at the beginning). Someone from Microsoft anonymously told them "[Some disks] do not allow the file system to force writes to disk properly" and someone from Seagate, a disk manufacturer, told them "[Some disks (though none from us)] do not allow the file system to force writes to disk properly". Bairavasundaram et al., FAST’07 also found the same thing when they looked into disk reliability.

Error rates

We've seen that filesystems sometimes don't handle disk errors correctly. If we want to know how serious this issue is, we should look at the rate at which disks emit errors. Disk datasheets will usually quote an uncorrectable bit error rate of 1e-14 for consumer HDDs (often called spinning metal or spinning rust disks), 1e-15 for enterprise HDDs, 1e-15 for consumer SSDs, and 1e-16 for enterprise SSDs. This means that, on average, we expect to see one unrecoverable data error every 1e14 bits we read on an HDD.

To get an intuition for what this means in practice, 1TB is now a pretty normal disk size. If we read a full drive once, that's 1e12 bytes, or almost 1e13 bits (technically 8e12 bits), which means we should see, in expectation, one unrecoverable error if we buy a 1TB HDD and read the entire disk ten-ish times. Nowadays, we can buy 10TB HDDs, in which case we'd expect to see an error (technically, 8/10ths of an error) on every read of an entire consumer HDD.

In practice, observed error rates are significantly higher. Narayanan et al., SYSTOR’16 (Microsoft) observed SSD error rates from 1e-11 to 6e-14, depending on the drive model. Meza et al., SIGMETRICS’15 (FB) observed even worse SSD error rates, 2e-9 to 6e-11 depending on the model of drive. Depending on the type of drive, 2e-9 is 2 gigabits, or 250 MB, which is 500 thousand to 5 million times worse than stated on datasheets, depending on the class of drive.

Bit error rate is arguably a bad metric for disk drives, but this is the metric disk vendors claim, so that's what we have to compare against if we want an apples-to-apples comparison. See Bairavasundaram et al., SIGMETRICS'07, Schroeder et al., FAST'16, and others for other kinds of error rates.

One thing to note is that it's often claimed that SSDs don't have problems with corruption because they use error correcting codes (ECC), which can fix data corruption issues. "Flash banishes the specter of the unrecoverable data error", etc. The thing this misses is that modern high-density flash devices are very unreliable and need ECC to be usable at all. Grupp et al., FAST’12 looked at error rates of the kind of flash that underlies SSDs and found error rates from 1e-1 to 1e-8. 1e-1 is one error every ten bits, 1e-8 is one error every 100 megabits.

Power loss

Another claim you'll hear is that SSDs are safe against power loss and some types of crashes because they now have "power loss protection" -- there's some mechanism in the SSDs that can hold power for long enough during an outage that the internal SSD cache can be written out safely.

Luke Leighton tested this by buying 6 SSDs that claim to have power loss protection and found that four out of the six models of drive he tested failed (every drive that wasn't an Intel drive). If we look at the details of the tests, when drives fail, it appears to be because they were used in a way that the implementor of power loss protection didn't expect (writing "too fast", although well under the rate at which the drive is capable of writing, or writing "too many" files in parallel). When a drive advertises that it has power loss protection, this appears to mean that someone spent some amount of effort implementing something that will, under some circumstances, prevent data loss or data corruption under power loss. But, as we saw in Kyle's talk yesterday on distributed systems, if you want to make sure that the mechanism actually works, you can't rely on the vendor to do rigorous or perhaps even any semi-serious testing and you have to test it yourself.

Retention

If we look at SSD datasheets, a young-ish drive (one with 90% of its write cycles remaining) will usually be specced to hold data for about ten years after a write. If we look at a worn out drive, one very close to end-of-life, it's specced to retain data for one year to three months, depending on the class of drive. I think people are often surprised to find that it's within spec for a drive to lose data three months after the data is written.

These numbers all come from datasheets and specs, and as we've seen, datasheets can be a bit optimistic. On many early SSDs, using up most or all of a drive's write cycles would cause the drive to brick itself, so you wouldn't even get the spec'd three month data retention.

Corollaries

Now that we've seen that there are significant problems at every level of the file stack, let's look at a couple things that follow from this.

What to do?

What we should do about this is a big topic; in the time we have left, one thing we can do instead of writing to files is to use databases. If you want something lightweight and simple that you can use in most places you'd use a file, SQLite is pretty good. I'm not saying you should never use files. There is a tradeoff here. But if you have an application where you'd like to reduce the rate of data corruption, consider using a database to store data instead of using files.
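
To give a flavor of what that looks like, here's a small C sketch that stores a value through SQLite's C API rather than writing a file by hand; the database path, table, and key are made up for the example, and SQLite takes care of the journaling, fsyncs, and crash recovery we walked through earlier.

#include <sqlite3.h>
#include <stdio.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("state.db", &db) != SQLITE_OK) return 1;
    char *err = NULL;
    /* Everything inside the transaction is durable once COMMIT returns. */
    int rc = sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT);"
        "BEGIN;"
        "INSERT OR REPLACE INTO kv (k, v) VALUES ('greeting', 'a bar');"
        "COMMIT;",
        NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "sqlite error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return rc == SQLITE_OK ? 0 : 1;
}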

FS support

At the start of this talk, we looked at this Dropbox example, where most people thought that there was no reason to remove support for most Linux filesystems because filesystems are all the same. I believe their hand was forced by the way they want to store/use data, which they can only do with ext given how they're doing things (which is arguably a mis-feature), but even if that wasn't the case, perhaps you can see why software that's attempting to sync data to disk reliably and with decent performance might not want to support every single filesystem in the universe for an OS that, for their product, is relatively niche. Maybe it's worth supporting every filesystem for PR reasons and then going through the contortions necessary to avoid data corruption on a per-filesystem basis (you can try coding straight to your reading of the POSIX spec, but as we've seen, that won't save you on Linux), but the PR problem is caused by a misunderstanding.

The other comment we looked at on reddit, and also a common sentiment, is that it's not a program's job to work around bugs in libraries or the OS. But user data gets corrupted regardless of whose "fault" the bug is, and as we've seen, bugs can persist in the filesystem layer for many years. In the case of Linux, most filesystems other than ZFS seem to have decided it's correct behavior to throw away data on fsync error and also not report that the data can't be written (as opposed to FreeBSD or OpenBSD, where most filesystems will at least report an error on subsequent fsyncs if the error isn't resolved). This is arguably a bug and also arguably correct behavior, but either way, if your software doesn't take this into account, you're going to lose or corrupt data. If you want to take the stance that it's not your fault that the filesystem is corrupting data, your users are going to pay the cost for that.

FAQ

While putting this talk together, I read a bunch of different online discussions about how to write to files safely. For discussions outside of specialized communities (e.g., LKML, the Postgres mailing list, etc.), many people will drop by to say something like "why is everyone making this so complicated? You can do this very easily and completely safely with this one weird trick". Let's look at the most common "one weird trick"s from two thousand internet comments on how to write to disk safely.

Rename

The most frequently mentioned trick is to rename instead of overwriting. If you remember our single-file write example, we made a copy of the data that we wanted to overwrite before modifying the file. The trick here is to do the opposite:

  1. Make a copy of the entire file
  2. Modify the copy
  3. Rename the copy on top of the original file

This trick doesn't work. People seem to think that this is safe because the POSIX spec says that rename is atomic, but that only means rename is atomic with respect to normal operation; it doesn't mean it's atomic on crash. This isn't just a theoretical problem; if we look at mainstream Linux filesystems, most have at least one mode where rename isn't atomic on crash. Rename also isn't guaranteed to execute in program order, as people sometimes expect.

The most mainstream exception where rename is atomic on crash is probably btrfs, but even there, it's a bit subtle -- as noted in Bornholt et al., ASPLOS’16, rename is only atomic on crash when renaming to replace an existing file, not when renaming to create a new file. Also, Mohan et al., OSDI’18 found numerous rename atomicity bugs on btrfs, some quite old and some introduced the same year as the paper, so you may not want to rely on this without extensive testing, even if you're writing btrfs-specific code.

And even if this worked, the performance of this technique is quite poor.

Append

The second most frequently mentioned trick is to only ever append (instead of sometimes overwriting). This also doesn't work. As noted in Pillai et al., OSDI’14 and Bornholt et al., ASPLOS’16, appends don't guarantee ordering or atomicity and believing that appends are safe is the cause of some bugs.

One weird tricks

We've seen that the most commonly cited simple tricks don't work. Something I find interesting is that, in these discussions, people will drop into a discussion where it's already been explained, often in great detail, why writing to files is harder than someone might naively think, ignore all warnings and explanations and still proceed with their explanation for why it's, in fact, really easy. Even when warned that files are harder than people think, people still think they're easy!

Conclusion

In conclusion, computers don't work (but you probably already know this if you're here at Gary-conf). This talk happened to be about files, but there are many areas we could've looked into where we would've seen similar things.

One thing I'd like to note before we finish is that, IMO, the underlying problem isn't technical. If you look at what huge tech companies do (companies like FB, Amazon, MS, Google, etc.), they often handle writes to disk pretty safely. They'll make sure that they have disks where power loss protection actually work, they'll have patches into the OS and/or other instrumentation to make sure that errors get reported correctly, there will be large distributed storage groups to make sure data is replicated safely, etc. We know how to make this stuff pretty reliable. It's hard, and it takes a lot of time and effort, i.e., a lot of money, but it can be done.

If you ask someone who works on that kind of thing why they spend mind boggling sums of money to ensure (or really, increase the probability of) correctness, you'll often get an answer like "we have a zillion machines and if you do the math on the rate of data corruption, if we didn't do all of this, we'd have data corruption every minute of every day. It would be totally untenable". A huge tech company might have, what, order of ten million machines? The funny thing is, if you do the math for how many consumer machines there are out there and how much consumer software runs on unreliable disks, the math is similar. There are many more consumer machines; they're typically operated at much lighter load, but there are enough of them that, if you own a widely used piece of desktop/laptop/workstation software, the math on data corruption is pretty similar. Without "extreme" protections, we should expect to see data corruption all the time.

But if we look at how consumer software works, it's usually quite unsafe with respect to handling data. IMO, the key difference here is that when a huge tech company loses data, whether that's data on who's likely to click on which ads or user emails, the company pays the cost, directly or indirectly and the cost is large enough that it's obviously correct to spend a lot of effort to avoid data loss. But when consumers have data corruption on their own machines, they're mostly not sophisticated enough to know who's at fault, so the company can avoid taking the brunt of the blame. If we have a global optimization function, the math is the same -- of course we should put more effort into protecting data on consumer machines. But if we're a company that's locally optimizing for our own benefit, the math works out differently and maybe it's not worth it to spend a lot of effort on avoiding data corruption.

Yesterday, Ramsey Nasser gave a talk where he made a very compelling case that something was a serious problem, which was followed up by a comment that his proposed solution will have a hard time getting adoption. I agree with both parts -- he discussed an important problem, and it's not clear how solving that problem will make anyone a lot of money, so the problem is likely to go unsolved.

With GDPR, we've seen that regulation can force tech companies to protect people's privacy in a way they're not naturally inclined to do, but regulation is a very big hammer and the unintended consequences can often negate or more than negate the benefits of regulation. When we look at the history of regulations that are designed to force companies to do the right thing, we can see that it's often many years, sometimes decades, before the full impact of the regulation is understood. Designing good regulations is hard, much harder than any of the technical problems we've discussed today.

Acknowledgements

Thanks to Leah Hanson, Gary Bernhardt, Kamal Marhubi, Rebecca Isaacs, Jesse Luehrs, Tom Crayford, Wesley Aptekar-Cassels, Rose Ames, and Benjamin Gilbert for their help with this talk!

Sorry we went so fast. If there's anything you missed you can catch it in the pseudo-transcript at danluu.com/deconstruct-files.

This "transcript" is pretty rough since I wrote it up very quickly this morning before the talk. I'll try to clean it within a few weeks, which will include adding material that was missed, inserting links, fixing typos, adding references that were missed, etc.

Thanks to Anatole Shaw, Jernej Simoncic, @junh1024, and Josh Duff for comments/corrections/discussion on this transcript.

July 11, 2019

Frederic Cambus (fcambus)

Fuzzing DNS zone parsers July 11, 2019 01:00 PM

In my never-ending quest to improve the quality of my C codebases, I've been using AFL to fuzz statzone, the zone parser I use to generate monthly statistics on StatDNS. It helped me to find and fix a NULL pointer dereference.

I initially used the .arpa zone file as input, but then remembered that OpenDNSSEC bundles a special zone for testing purposes, containing a lot of seldom used resource records types, and decided to use this one too.

Out of curiosity, I decided to try fuzzing other DNS zone parsers. I started with validns 0.8, and within seconds the fuzzer found multiple NULL pointer dereferences.

validns

The first occurrence happens in the name2findable_name() function, and can be triggered with the following input:

arpa                    86400   IN      SOA     a.root-servers.net. nstld.verisign-grs.com. 2019021500 1800 900 604800 86400
arpa.                   86400   IN      RRSIG   SOA 8 1 86400 20190228000000 20190214230000 49906 arpa. Qot7qHAA2QhNmAz3oJUIGmxGJrKnWsIzEvZ92R+LV03K7YTFozio2U7Z534RZBhc0UJvlF1YenrbM6ugmF0z55CJD9JY7cFicalFPOkIuWslSl62vuIWHLwN5sA7VZ0ooVN2ptQpPHDa3W/9OPJRF0YqjBBBwD7IiL7V560rbXM=

With the above input, the following call to strlen(3) in rr.c results in a NULL pointer dereference because 's' ends up being NULL:

static unsigned char *name2findable_name(char *s)
{
    int l = strlen(s);

The second occurrence happens in the nsec_validate_pass2() function, and can be triggered with the following input:

arpa.                   86400   IN      SOA     a.root-servers.net. nstld.verisign-grs.com. 2019021500 1800 900 604800 86400
arpa.                   86400   IN      NSEC    a

With the above input, the following call to strcasecmp(3) in rr.c results in a NULL pointer dereference because 'rr->next_domain' ends up being NULL:

if (strcasecmp(rr->next_domain, zone_apex) == 0) {

Given those encouraging results, I went on to fuzz BIND, NSD and Knot zone parsers, using their zone validation tools named-checkzone, nsd-checkzone, and kzonecheck respectively.
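
For anyone who wants to try this at home, a setup along these lines should work (a sketch: the seed corpus, zone name, and build invocation are illustrative and depend on the project):

# build the target with AFL instrumentation
CC=afl-gcc ./configure && make
# seed the corpus with a small zone file, then start fuzzing;
# AFL substitutes @@ with the path of each generated test case
mkdir in && cp example.zone in/
afl-fuzz -i in -o out -- ./nsd-checkzone example.org @@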

While the fuzzers didn't produce any crashes for BIND and Knot after running for 3 days and 11 hours, they did produce some valid ones for NSD, so I decided to continue on nsd-checkzone and stop the other fuzzers.

nsd-checkzone

I let AFL complete one cycle, and as I didn't need the box for anything else at this time, I decided to let it run for a few more days. I ended the process after 16 days and 19 hours, completing 2 cycles with 167 unique crashes.


After sorting and analyzing the crashes, I had two valid issues to report.

The first one is an out-of-bounds read caused by improper validation of an array index, in the rdata_maximum_wireformat_size() function, in rdata.c.

The second one is a stack-based buffer overflow in the dname_concatenate() function in dname.c, which got assigned CVE-2019-13207.

=================================================================
==7395==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffcd6a9763f at pc 0x0000004dadbc bp 0x7ffcd6a97510 sp 0x7ffcd6a96cc0
WRITE of size 8 at 0x7ffcd6a9763f thread T0
    #0 0x4dadbb in __asan_memcpy (/home/fcambus/nsd/nsd-checkzone+0x4dadbb)
    #1 0x534251 in dname_concatenate /home/fcambus/nsd/dname.c:464:2
    #2 0x69e61f in yyparse /home/fcambus/nsd/./zparser.y:1024:12
    #3 0x689fd1 in zonec_read /home/fcambus/nsd/zonec.c:1623:2
    #4 0x6aedd1 in check_zone /home/fcambus/nsd/nsd-checkzone.c:61:11
    #5 0x6aea07 in main /home/fcambus/nsd/nsd-checkzone.c:127:2
    #6 0x7fa60ece6b96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
    #7 0x41c1d9 in _start (/home/fcambus/nsd/nsd-checkzone+0x41c1d9)

Address 0x7ffcd6a9763f is located in stack of thread T0 at offset 287 in frame
    #0 0x533f8f in dname_concatenate /home/fcambus/nsd/dname.c:458

  This frame has 1 object(s):
    [32, 287) 'temp' (line 459) <== Memory access at offset 287 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
      (longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow (/home/fcambus/nsd/nsd-checkzone+0x4dadbb) in __asan_memcpy
Shadow bytes around the buggy address:
  0x10001ad4ae70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4ae80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4ae90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4aea0: 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 00
  0x10001ad4aeb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10001ad4aec0: 00 00 00 00 00 00 00[07]f3 f3 f3 f3 f3 f3 f3 f3
  0x10001ad4aed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4aee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4aef0: 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 00
  0x10001ad4af00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x10001ad4af10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==7395==ABORTING

Both issues have been fixed and will be part of the NSD 4.2.2 release.

I also fuzzed ldns using ldns-read-zone for 12 days and 7 hours, but the only crashes it produced turned out to be merely triggering assertions.

It's been an interesting journey so far, and while finding issues is still relatively easy, sorting crashes and distinguishing between valid issues, duplicates, and false positives takes a lot of time. Nonetheless, reading third-party source code and analyzing what is going on and why the program crashes is both very instructive and rewarding.

For the time being, I plan to continue fuzzing stuff and will write more about my findings.

July 09, 2019

Mark J. Nelson (mjn)

C.A.R. Hoare's distant connection to the C. Hoare & Co private bank July 09, 2019 12:00 PM

A recent article on UK private bank C. Hoare & Co. made the rounds, noting that it has been run for more than 300 years by the Hoare family. Somehow this found its way into tech discussion boards, which led some people to wonder whether pioneering British computer scientist C.A.R. "Tony" Hoare is from the same Hoare family.

I investigated briefly, and as it turns out the answer appears to be: yes, but with only a distant connection to the branch of the family running the bank, now five generations removed.

The hint for how to track this down came from a mention in the book Reflections on the Work of C.A.R. Hoare (Springer, 2010) that C.A.R. Hoare was of a "somewhat upper-class" background, with a footnote pointing to thepeerage.com, a site primarily focusing on genealogy of the British upper classes.

From this, I was able to trace his family history back along the paternal line as follows:

At this point the genealogy seems to connect to someone who has additional biographical details in Wikipedia, assuming that the Charles James Hoare is the same one described in this article (besides the same name, the approximate timeframe and occupations match). Wikipedia describes Charles James Hoare as "third son of Henry ('Harry') Hoare, banker, of Fleet Street, London, a Partner in C. Hoare & Co", so there's the connection to the banking family, at a great-great-great grandfather (five generations back) distance, around the late 1700s or early 1800s.

The generations after that appear unrelated to banking though: clergyman, clergyman, military officer, colonial civil servant, and finally professor. (The colonial-civil-servant occupation explains the somewhat well known fact that C.A.R. Hoare was born in Sri Lanka.)

Not that this distant connection is necessarily very important biographical information, but there you have it, assuming I haven't made an error in reading the sources, and assuming that they also haven't made any errors.

Tobias Pfeiffer (PragTob)

Looking for a job! July 09, 2019 08:26 AM

NO LONGER (REALLY) LOOKING FOR A JOB I’m no longer looking for a job, or at least not really. I decided to go freelancing for now and I have a project until the end of October if nothing fails. So if you’ve got new freelance projects or you have a really great CTO/VP/Head of position […]

Pete Corey (petecorey)

The Many Ways to Define Verbs in J July 09, 2019 12:00 AM

I recently watched an interesting 12tone video on the “Euler Gradus Suavitatis” which is a formula devised by Leonhard Euler to analytically compute the consonance, or translated literally, the “degree of pleasure” of a given musical interval.

In this formula, n can be any positive integer whose prime factorization is notated as:
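
n = p1^e1 × p2^e2 × … × pk^ek

and the resulting degree of pleasure is

Gradus(n) = 1 + Σ ei × (pi - 1)

That is: one, plus the sum over every prime factor (counted with multiplicity) of that prime minus one. This is exactly the computation we'll build up in J below.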

In reality, n can be any ratio of two or more positive integers. Simple intervalic ratios, like an octave, are represented as 1:2, where n would be 2. For more complicated intervals, like a perfect fifth (3:2), we need to find the least common multiple of both components (6) before passing it into our Gradus Suavitatis function. We can even do the same for entire chords, like a major triad (4:5:6), whose least common multiple would be 60.

The Gradus Suavitatis in J

Intrigued, I decided to implement a version of the Gradus Suavitatis function using the J programming language.

Let’s start by representing our interval as a list of integers. Here we’ll start with a major triad:

   4 5 6
4 5 6

Next, we’ll find the least common multiple of our input:

   *./ 4 5 6
60

From there, we want to find the prime factors of our least common multiple:

   q: *./ 4 5 6
2 2 3 5

Since our primes are duplicated in our factorization, we can assume that their exponent is always 1. So we’ll subtract one from each prime, and multiply by one (which is a no-op):

   _1&+ q: *./ 4 5 6
1 1 2 4

Or even better:

   <: q: *./ 4 5 6
1 1 2 4

Finally, we sum the result and add one:

   1 + +/ <: q: *./ 4 5 6
9

And we get a Gradus Suavitatis, or “degree of pleasure” of 9 for our major triad. Interestingly, a minor triad (10:12:15) gives us the same value:

   1 + +/ <: q: *./ 10 12 15
9

Gradus Suavitatis as a Verb

To make playing with our new Gradus Suavitatis function easier, we should wrap it up into a verb.

One way of building a verb in J is to use the 3 : 0 or verb define construct, which lets us write a multi-line verb that accepts y and optionally x as arguments. We could write our Gradus Suavitatis using verb define:

   egs =. verb define
   1 + +/ <: q: *./ y
   )

While this is all well and good, I wasn’t happy that we needed three lines to represent our simple function as a verb.

Another option is to use caps ([:) to construct a verb train that effectively does the same thing as our simple chained computation:

   egs =. [: 1&+ [: +/ [: <: [: q: *./
   egs 4 5 6
9

However the use of so many caps makes this solution feel like we’re using the wrong tool for the job.

Raul on Twitter showed me the light and made me aware of the single-line verb definition syntax, which I somehow managed to skip over while reading the J Primer and J for C Programmers. Using the verb def construct, we can write our egs verb on a single line using our simple right to left calculation structure:

   egs =. verb def '1 + +/ <: q: *./ y'
   egs 4 5 6
9

It turns out that there’s over two dozen forms of “entity” construction in the J language. I wish I’d known about these sooner.

Regardless of how we define our verb, the Gradus Suavitatis is an interesting function to play with. Using the plot utility, we can plot the “degree of pleasure” of each of the interval ratios from 1:1 to 1:100 and beyond:

July 08, 2019

Derek Jones (derek-jones)

Complexity is a source of income in open source ecosystems July 08, 2019 10:19 PM

I am someone who regularly uses R, and my interest in programming languages means that on a semi-regular basis I spend time reading blog posts about the language. Over the last year, or so, I had noticed several patterns of behavior, and after reading a recent blog post things started to make sense (the blog post gets a lot of things wrong, but more of that later).

What are the patterns that have caught my attention?

Some background: Hadley Wickham is the guy behind some very useful R packages. Hadley was an academic, and is now the chief scientist at RStudio, the company behind the R language specific IDE of the same name. As Hadley’s thinking about how to manipulate data has evolved, he has created new packages, and has been very prolific. The term Hadley-verse was coined to describe an approach to data manipulation and program structuring, based around use of packages written by the man.

For the last nine months I have noticed that the term Tidyverse is being used more regularly to describe what had been the Hadley-verse. And???

Another thing that has become very noticeable, over the last six months, is the extent to which a wide range of packages now have dependencies on packages in the Hadley-verse/Tidyverse. And???

A recent post by Norman Matloff complains about the Tidyverse’s complexity (and about the consistency between its packages; which I had always thought was a good design principle), and how RStudio’s promotion of the Tidyverse could result in it becoming the dominant R world view. Matloff has an academic world view and misses what is going on.

RStudio, the company, need to sell their services (their IDE is clunky and will be wiped out if a top of the range product, such as JetBrains, adds support for R). If R were simple to use, companies would have less need to hire external experts. A widely used complicated library of packages is a god-send for a company looking to sell R services.

I don’t think Hadley Wickam intentionally made things complicated, any more than the creators of the Microsoft server protocols added interdependencies to make life difficult for competitors.

A complex package ecosystem was probably not part of RStudio’s product vision, at least for many years. But sooner or later, RStudio management will have realised that simplicity and ease of use is not in their interest.

Once a collection of complicated packages exist, it is in RStudio’s interest to get as many other packages using them, as quickly as possible. Infect the host quickly, before anybody notices; all the while telling people how much the company is investing in the community that it cares about (making lots of money from).

Having this package ecosystem known as the Hadley-verse gives too much influence to one person, and makes it difficult to fire him later. Rebranding as the Tidyverse solves these problems.

Matloff accuses RStudio of monopoly behavior, I would have said they are fighting for survival (i.e., creating an environment capable of generating the kind of income a VC funded company is expected to make). Having worked in language environments where multiple, and incompatible, package ecosystems existed, I can see advantages in there being a monopoly. Matloff is also upset about a commercial company swooping in to steal their precious, a common academic complaint (academics swooping in to steal ideas from commercially developed software is, of course, perfectly respectable). Matloff also makes claims about teachability of programming that are not derived from any experimental evidence, but then everybody makes claims about programming languages without there being any experimental evidence.

RStudio management rode in on the data science wave, raising money from VCs. The wave is subsiding and they now need to appear to have a viable business (so they can be sold to a bigger fish), which means there has to be a visible market they can sell into. One way to sell in an open source environment is for things to be so complicated, that large companies will pay somebody to handle the complexity.

July 07, 2019

Andrew Owen (yumaikas)

The Kinship of Midnight Travel July 07, 2019 07:25 PM

There is a strange sort of kinship that I feel for other people I see when I'm traveling from one place to another after midnight. Part of it is that travel after everyone is supposed to be in bed means that I see things without all the people around, denuded of the hustle and bustle that often gives a place its charm or stress.

But then, spotting that lone trucker at 1 AM, or the lady working the airline check-in counter starting at 3:30 AM, or the security guard at 4:30 AM, I know that I'm with people that also know what it means to see a place without its crowds. They also have seen the empty streets, the fog, the sprinklers running in the night. They know what it is to wake before the sun, or to chase the hours into the night.

Sometimes this happens in airports; today, it happened as I was driving from Carthage, MO to Joplin, MO. I saw a trucker pulling in off the highway onto one of the commercial districts in Joplin, and was reminded of all the times I've traveled in the dark, either at the end of a long day in airports, or at the start of a long day on the road. And I knew that, in some small way, that trucker also knew what it was like to travel at night.

For them, it likely has lost any of the romance or novelty that I still give it. But there's still something to be said for that most remote of connections, if for no other reason than that it's just me and that other driver on the road, neither of us lost in the crowd, both wanting to reach a destination, yet still travelling in the night.

Thing is, I won’t ever meet that person. But we still shared a space, both members of the midnight traveling clan, a space that most people actively opt-out of. Many interesting bits of life are often found at the edges, where only a few people are paying attention, rather than in the rush of the crowds, where everyone already is, and has already seen them.

Ponylang (SeanTAllen)

Last Week in Pony - July 7, 2019 July 07, 2019 09:29 AM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

July 06, 2019

Jeff Carpenter (jeffcarp)

3 Tips for New Technical Interviewers July 06, 2019 10:00 PM

One year ago I conducted my first software engineering interview at Google. In that first interview I gave, I guarantee you I was more nervous than the candidate I was interviewing. Those first few interviews were particularly nerve-wracking. A lot was on the line—I didn’t want to screw up this person’s career by being a bad interviewer! Since then I’ve conducted many more interviews and learned a lot about how to interview candidates successfully.

Ponylang (SeanTAllen)

0.29.0 Released July 06, 2019 02:32 PM

We are proud to announce the release of Pony version 0.29.0. This release includes several improvements to the standard library’s TCP support. This version contains breaking API changes. We recommend upgrading as soon as possible.

Jan van den Berg (j11g)

Thomas Dekker: The Descent (Mijn Gevecht) – Thijs Zonneveld July 06, 2019 08:35 AM

I finished this book in one sitting. Partly because Zonneveld has a pleasant writing style. But also because the rather recent story of a hugely talented and (very) young cyclist who early on in his career got involved with dope and raced towards destruction is fascinating.

Thomas Dekker: The Descent (Mijn Gevecht) – Thijs Zonneveld (2016) – 220 pages

It’s the (auto)biography of Thomas Dekker but it is just as much the biography of the cycling world in the early 2000s. And this world, as we now know, was rotten to the core. This book helped uncover parts of it when it came out in 2016. And many more books about this subject have come out since and around that time.

The book is telling and doesn’t hold back, for anything or anyone. Even Dekker himself doesn’t come across as a particularly likeable character. Arrogant, cocky, egotistical and self-destructive to a fault. A very bright star who burned out VERY quickly.

He only did one Tour de France and his actual relevant career was only a few short years. The book came out three years ago, and even this year’s Tour de France will have riders older than Dekker is at the moment. So there is a sense of what could have been.

One important takeaway is the notion that using dope is a gradual (non-conscious) thing. Driven by ego and desire to win. But even more important is the notion that there is no such thing as a casual doper. You either dope or you don’t.

As a cycling enthusiast it’s not necessarily what you want to read. But it is what it is.

The post Thomas Dekker: The Descent (Mijn Gevecht) – Thijs Zonneveld appeared first on Jan van den Berg.

Slaughterhouse Five – Kurt Vonnegut July 06, 2019 08:33 AM

Slaughterhouse Five is a well-known classic. I had been wanting to read it for quite some time, and now that I finally have, I must say it was absolutely not what I expected.

In a good way.

Slaughterhouse Five – Kurt Vonnegut (1969) – 220 pages

The book is a sort of autobiographical non-chronological story about the bombing of Dresden, but it is also about time travel, space travel and aliens and different thoughts on philosophy. So yes, there is quite a lot to unpack in this wondrously written meta-fiction novel.

As a reader you have to work hard to keep up with all the time and place switches. But the fantastical and funny storytelling makes that easy. (Vonnegut has a certain dry comedic style that I suspect people like Douglas Adams must have been inspired by.)

But even the — sometimes — nihilistic black humour can’t hide that Vonnegut is actually trying to tell or show the reader something. What that is, is largely up to the reader — and also what makes this a postmodern book. One of the things the book itself claims to be, is an anti-war book. And that is certainly also true. So it goes.

The post Slaughterhouse Five – Kurt Vonnegut appeared first on Jan van den Berg.

July 05, 2019

Simon Zelazny (pzel)

Plain text & un*x tools are still the best July 05, 2019 10:00 PM

I did some freelance work for a friendly company that had a problem with its production dataset. They had a Postgres database, running on a 1TB drive, holding historic data in a ~500GB table.

The historic data wasn't needed for anything aside from archival reasons, so the developers had tried — in between feature work — to remove the old rows while maintaining DB uptime. Unfortunately, all their previous attempts at clearing out obsolete data had failed.

Here's what they tried (on clones of the production VM):

  • DELETE FROM samples WHERE timestamp < 1234567890
  • VACUUM
  • VACUUM FULL
  • INSERT INTO new_samples SELECT * FROM samples WHERE (..), followed by TRUNCATE TABLE samples and rename

All of these experiments failed to inspire confidence, as they either filled up the available hard drive space with temporary data, or took so long that the potential downtime was deemed unacceptable (looking at you, VACUUM FULL). In particular, any operation that tried to rebuild the samples table afresh would fail. This was due to the fact that the disk was too small to hold two copies of the table, as dictated by the atomicity requirement of SQL operations.

After some experimentation, we determined that the fastest way to achieve our goals of deleting old data, slimming down index bloat, and reclaiming space for the OS, was to:

  1. export the entire DB to a text file of SQL statements (a.k.a a dump)
  2. remove the existing database completely
  3. clean up the dump in text form
  4. import the trimmed-down dump into a fresh database

This operation proved to be the fastest and most efficient in terms of disk-space required, although we were forced to incur downtime.

The joy of AWK

A concise AWK script was perfect for the job:

1.   BEGIN { in_samples=0; min_ts=1514764800; }
2.   /^\\.$/ { if (in_samples==1) { in_samples=0; } }
3.   // {
4.     if (in_samples) {
5.       if ($1 >= min_ts) { print } else { } ;
6.     } else {
7.       print
8.     }
9.   }
10.  /COPY public.samples/ { in_samples=1; }

In short, the script can be in 2 states:

in_samples := 0 | 1

It starts out in in_samples=0 (1.), and copies over every line of text (7.). If it finds the first line of the beginning of the samples table (10.), it switches to in_samples=1. In this state, it will only copy over samples that are NEWER than January 1, 2018 (5.). If it finds the end-of-table-data marker and is in_samples, it will exit this state (2.). Unless there are two tables in the dump named public.samples (there aren't), the script will never enter in_samples=1 again, and will simply copy over all other rows verbatim (line 7., again).

It's important to note that awk evaluates all clauses in order for every line of input (except for the special BEGIN and END clauses), so some lines of input might 'hit' several awk statements. This is beneficial, but also means that the order of the clauses is important. (Consider what would happen if the clause on line 10 had been placed before the clause on line 3.)
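
For readers who don't speak AWK, here is a rough Python equivalent of the same state machine. This is only a sketch, not what we actually ran; it assumes the dump is streamed on stdin and that, as in the script above, the first tab-separated column of each samples row is the Unix timestamp:

import sys

in_samples = False
MIN_TS = 1514764800  # 2018-01-01 00:00:00 UTC

for line in sys.stdin:
    stripped = line.rstrip("\n")
    if stripped == "\\.":                 # end-of-COPY marker: leave the samples block
        in_samples = False
    if in_samples:
        # inside the samples COPY block: keep only sufficiently new rows
        if int(stripped.split("\t", 1)[0]) >= MIN_TS:
            sys.stdout.write(line)
    else:
        sys.stdout.write(line)            # everything outside the block is copied verbatim
    if stripped.startswith("COPY public.samples"):
        in_samples = True                 # data rows start on the next line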

Summing up

The entire operation of pg_dumpall, awk, and psql import took around 3 hours, which was acceptable downtime for this database as part of scheduled nighttime maintenance. The space taken up by the Postgres data directory went down from ~760GB to ~200GB.

That day I learned that plain text is still the lingua franca of UN*X, and that 40-year-old tools are still excellent at what they were built to do.

Leo Tindall (LeoLambda)

Speedy Desktop Apps With GTK and Rust July 05, 2019 09:00 PM

The web platform is the delivery mechanism of choice for a ton of software these days, either through the web browser itself or through Electron, but that doesn’t mean there isn’t a place for a good old fashioned straight-up desktop application in the picture. Fortunately, it’s easier than ever to write a usable, pretty, and performant desktop app, using my language of choice (Rust) and the wildly successful cross-platform GUI framework GTK.

July 04, 2019

Derek Jones (derek-jones)

How much is a 1-hour investment today worth a year from now? July 04, 2019 02:49 PM

Today, I am thinking of investing 1-hour of effort adding more comments to my code; how much time must this investment save me X-months from now, for today’s 1-hour investment to be worthwhile?

Obviously, I must save at least 1-hour. But, the purpose of making an investment is to receive a greater amount at a later time; ‘paying’ 1-hour to get back 1-hour is a poor investment (unless I have nothing else to do today, and I’m likely to be busy in the coming months).

The usual economics-based answer relies on compound interest, the technique your bank uses to calculate how much you owe them (or perhaps they owe you), i.e., the expected future value grows exponentially at some interest rate.

Psychologists were surprised to find that people don’t estimate future value the way economists do. Hyperbolic discounting provides a good match to the data from experiments that asked subjects to value future payoffs. The form of the equation used by economists is: e^{-kD}, while hyperbolic discounting has the form 1/{1+kD}, where: k is a constant, and D the period of time.
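
The difference between the two forms is easy to see numerically. A few lines of Python (the value of k here is arbitrary, chosen purely for illustration):

import math

k = 0.5
for D in [0.5, 1, 2, 5, 10]:
    exponential = math.exp(-k * D)   # economists' compound-interest style discounting
    hyperbolic = 1 / (1 + k * D)     # the form psychologists fit to experimental data
    print(f"D={D:>4}: exponential={exponential:.3f}  hyperbolic={hyperbolic:.3f}")

With the same k, the exponential curve decays far faster at long delays: by D=10 the exponential value is below 0.01, while the hyperbolic value is still about 0.17.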

The simple economic approach does not explicitly include the risk that one of the parties involved may cease to exist. Including risk is non-trivial; banks handle the risk that you might disappear by asking for collateral, or by adding something to the interest rate charged.

The fact that humans, and some other animals, have been found to use hyperbolic discounting suggests that evolution has found that this approach to discounting time increases the likelihood of genes being passed on to the next generation. A bird in the hand is worth two in the bush.

How do software developers discount investment in software engineering projects?

The paper Temporal Discounting in Technical Debt: How do Software Practitioners Discount the Future? describes a study that specifies a decision that has to be made and two options, as follows:

“You are managing an N-years project. You are ahead of schedule in the current iteration. You have to decide between two options on how to spend our upcoming week. Fill in the blank to indicate the least amount of time that would make you prefer Option 2 over Option 1.

  • Option 1: Implement a feature that is in the project backlog, scheduled for the next iteration. (five person days of effort).
  • Option 2: Integrate a new library (five person days effort) that adds no new functionality but has a 60% chance of saving you person days of effort over the duration of the project (with a 40% chance that the library will not result in those savings).

Subjects are then asked six questions, each having the following form (for various time frames):

“For a project time frame of 1 year, what is the smallest number of days that would make you prefer Option 2? ___”

The experiment is run twice, using professional developers from two companies, C1 and C2 (23 and 10 subjects, respectively), and the data is available for download :-)

The following plot shows normalised values given by some of the subjects from company C1, for the various time periods used (y-axis shows PresentValue/FutureValue). On a log scale, values estimated using the economists exponential approach would form a straight line (e.g., close to the first five points of subject M, bottom right), and values estimated using the hyperbolic approach would have the concave form seen for subject C (top middle) (code+data).

Normalised return required for various elapsed years.

Subject B is asking for less, not more, over a longer time period (several other subjects have the same pattern of response). Why did Subject E (and most of subject G’s responses) not vary with time? Perhaps they were tired and were not willing to think hard about the problem, or perhaps they did not think the answer made much difference. The subjects from company C2 showed a greater amount of variety. Company C1 had some involvement with financial applications, while company C2 was involved in simulations. Did this domain knowledge spill over into company C1’s developers being more likely to give roughly consistent answers?

The experiment was run online, rather than with an experimenter in the room with subjects. It is possible that subjects would have invested more effort in a more formal setting, with an experimenter who had made the effort to be present. Also, if an experimenter had been present, it would have been possible to ask questions to clarify any issues.

Both exponential and hyperbolic equations can be fitted to the data, but given the diversity of answers, it is difficult to put much weight on either regression model. Some subjects clearly gave responses fitting a hyperbolic equation, while others gave responses fitted approximately well by either approach. It was possible to fit the combined data from all of company C1’s subjects to a single hyperbolic equation model (the most significant between-subject variation was the value of the intercept); no such luck with the data from company C2.

I’m very pleased to see there has been a replication of this study, but the current version of the paper is a jumble of ideas, and is thin on experimental procedure. I’m sure it will improve.

What do we learn from this study? Perhaps that developers need to learn something about calculating expected future payoffs.

Jeff Carpenter (jeffcarp)

Side Projects July 04, 2019 04:14 AM

Here are some side projects I’ve worked on.

  • Running Pace Calculator (2019): an offline-enabled running pace calculator. I wrote about how I made it here. → Visit pacing.run
  • JLPT Daily N3 Twitter bot (2013 to present): I run a Twitter bot that tweets vocabulary for the JLPT N3 test. → Visit @jlptdaily3 on Twitter
  • rcrd (2011 to present): rcrd is my barebones app for doing quantified-self. → Visit rcrd

July 02, 2019

Bogdan Popa (bogdan)

Announcing racket-sentry July 02, 2019 07:00 AM

I just released the first version of sentry, a Racket library that lets you capture exceptions using the Sentry API.

Jeff Carpenter (jeffcarp)

Brief: Machine Learning July 02, 2019 04:14 AM

Briefs are where I try condensing my notes on a subject into a concise list of explanations (Feynman Technique-style), each fewer than 250 words. 🚨 This brief is a work in progress. 🚨

Overview

  • Types of Prediction: classification and regression.
  • Types of Models: supervised is where you know all the labels beforehand, unsupervised is where you don’t. Semi-supervised is where you know some.
  • Evaluating Models: precision, recall.
  • NLP: text preprocessing (GloVe, BERT).
  • Binary classification: a classification task where the model has 2 choices.

Andreas Zwinkau (qznc)

Definitions of Software Architecture July 02, 2019 12:00 AM

Software architecture documents the shared understanding of a software system.

Read full article!

July 01, 2019

Stig Brautaset (stig)

Speeding up tests on CircleCI for a Python Django project July 01, 2019 05:49 PM

I outline how we reduced the time to run a Django application's CI test suite from about 13 minutes to under 4 minutes.

Kevin Burke (kb)

Using a Bernzomatic TS8000 Kitchen Torch to Sear Meat July 01, 2019 04:43 AM

For a company trying to sell and explain a product, a lot of this information was amazingly difficult to find, so I wrote this.

For the rest of this we'll assume you're cooking a steak but the same advice applies to most other meats/cuts.

Why Do You Want A Torch to Sear Meat

To get a good texture on a steak, you need to get the outside of the steak really hot. If you keep the steak on the heat source for too long, though, it will start to cook the inside too much, and you'll end up with a well done steak.

A torch helps with this problem by making it possible to sear the outside of the steak very quickly which means the outside can get a nice sear while the inside stays at medium rare.

What to Get

You want a torch that can char the surface of a steak in under a minute. Most of the torches can't actually get that hot!

I used a butane crème brûlée torch for years and would get frustrated that my steaks didn't look like the ones on Serious Eats unless I torched them for 5-6 minutes. The torch wasn't hot enough. Using a cast iron pan by itself, even on the highest setting, is also not likely to get hot enough. If your steak does not look as black as the one in Serious Eats, the outside is not getting hot enough.

I read the Wirecutter guide and got the Bernzomatic TS8000. This torch is definitely hot enough; with propane it's about 3x as hot as my crème brûlée torch. I used it tonight to sear a tri tip. It is so much better.

Is that it? (No, you need fuel)

No. The Bernzomatic torch is just a torch; you still need to buy fuel. The TS8000 works with either butane or propane. You don't want butane, though; you want propane, because it gets 50% hotter than butane does.

Every photo on the Internet of both the TS8000 and propane fuel tanks either shows them connected or with the caps on. You need a propane tank that can connect to this, on the bottom of the TS8000.

The top of your propane canister should look like this, with the cap off:

Bernzomatic sells a compatible 14 oz propane tank. You can also use the green Coleman camping gas tanks. The advantage of the Coleman tank is that it is cheap and your grocery/hardware store probably has them. It weighs 16 ounces full, which sounds heavy, but it's fine; it's what I used.

I tried to find a small tank - 5-10 ounces - but didn't find anything promising. You'll probably get something in the 12-16 ounce range. Just put it on the end of the torch and sear with the whole tank attached.

Summary

  • Get the Bernzomatic TS8000, it is one of the few torches that gets hot enough

  • Use propane, not butane, with it

  • Coleman camping gas tanks are fine

That's it. Here is the manual for the TS8000. Happy searing!

Pete Corey (petecorey)

Building My Own Spacemacs July 01, 2019 12:00 AM

I switched from Vim to Spacemacs several years ago and have never looked back. At first, Spacemacs offered an amazingly polished and complete editing environment that I immediately fell in love with. But over the years my Spacemacs environment began to fall apart. My config grew stale, I found myself having to write clumsy custom layers to incorporate unsupported Emacs packages, and the background radiation of bugs and glitches started to become unignorable.

At the suggestion of Andrew on Twitter, I decided to ditch Spacemacs completely and roll my own Emacs configuration.

I couldn’t be happier with the result!

Before we dive into some of the details, here’s my entire Emacs init.el file at the time of posting this article. My current configuration is modeled after the tiny subset of Spacemacs I found myself using on a day-to-day basis. You can get an idea of my typical workflow by looking at my General key mappings.

I work on several projects throughout the day, so I use Projectile to manage operations within each of those projects. I use Perspective to isolate buffers between those projects. The projectile-persp-switch-project package acts as a nice bridge between the two packages:

"p" 'projectile-command-map
"pp" 'projectile-persp-switch-project
"pf" 'counsel-projectile

Spacemacs has the home-grown concept of a “layout” that lets you switch between numbered workspaces. I was worried I would miss this feature, as a common workflow for me was to hit SPC p l <workspace number> to switch between various workspaces. Thankfully, I’ve found that projectile-persp-switch-project works just as well, if not better than, Spacemacs’ “layout” system. I hit SPC p p and then type one or two characters to narrow down onto the project I want. I no longer have to remember workspace numbers, and I don’t need to worry about exceeding ten workspaces.

When I’m not using counsel-projectile to find files within a project, I use deer to navigate directories:

"a" '(:ignore t :which-key "Applications")
"ar" 'ranger
"ad" 'deer

The only other external application I use is a terminal, but I use it so frequently I’ve mapped it to SPC ', just like Spacemacs:

"'" 'multi-term

I’ve also included a few cosmetic customizations to build out a distraction-free code editing environment.

As you can see, I have a fairly simple workflow, and my custom Emacs configuration reflects that simplicity. Not only is my configuration simpler, but I’ve noticed that startup times have improved significantly, and the number of in-editor bugs and glitches I’ve encountered has dropped to zero. I’ve also developed a much deeper understanding of Emacs itself and the Emacs ecosystem.

I couldn’t be happier!

Andreas Zwinkau (qznc)

One Letter Programming Languages July 01, 2019 12:00 AM

If you are looking for a free name, there is none.

Read full article!

Robin Schroer (sulami)

My Thoughts on spec July 01, 2019 12:00 AM

In the beginning of this year I started a new job, and I am now fortunate enough to be writing Clojure full-time. I believe that Clojure is a very well crafted language and enables developers like few others, but I also have some grievances to report. I want to prefix this by saying that I love Clojure despite its faults, and this is more of a constructive criticism piece than anything else.

What Is spec?

What I want to discuss today is clojure.spec, the gradual typing solution shipped with Clojure since 1.9. I am of course aware that spec is officially in alpha, and that there is a second alpha version which might address some of my points here. But as I have not tried alpha 2 yet, and a lot of people are using alpha 1, we will be mostly looking at alpha 1 in this post. Still, feel free to contact me and tell me how alpha 2 will improve things.

In case you are not familiar with spec, here is a quick run-down of how it works:

Clojure is a dynamically typed language, meaning no (or very few) type annotations in the source code, unlike say Haskell or most curly-brace languages. While this removes a lot of visual clutter, and the language is designed in such a way that most functions can operate on most types for maximum flexibility, this also means that sometimes things break at run time due to type errors. To make dealing with types easier, spec allows you to place type definitions in strategic places, say at entry- and exit points of an app or a module, to ensure that your idea of the structure and type of your data still lines up with reality.

The benefit of this approach over static typing is that you can encapsulate parts of your software with type safety without having to deal with the encumbrance in the internals. You define a contract to the outside, but stay flexible in your implementation.

In practice it looks like this:

(ns spec-demo
  (:require [clojure.spec.alpha :as s]
            [clojure.spec.gen.alpha :as gen]))
            
(s/def ::name string?)
(s/def ::age pos-int?)

(s/def ::person
  (s/keys :req-un [::name
                   ::age]))
                
(s/valid? ::person
          {:name "John"
           :age 35})
;; => true

(s/explain ::person
           {:name 35})
;; => 35 - failed: string? in: [name] at: [:name] spec: :spec-demo/name
;;    {:name 35} - failed: (contains? % :age) spec: :spec-demo/person

(gen/generate
  (s/gen ::person))
;; => {:name "3o311eSXA4Dm9cLkENIt3J5Gb", :age 3318}

This snippet demonstrates creating a spec person which has sub-specs name and age, validates some data against the spec, explains why some data does not validate, and also generates some random data compliant with the spec. The last bit is incredibly useful for testing, as you can imagine. There are some more advanced features than what I have shown here, like specs for in- and output types of functions, but this is the gist of it.

So What Is Wrong with spec?

specs as Macros

spec is implemented using macros, which are compile-time expansions, think template meta-programming in C++. This brings some limitations, which are especially noticeable in Clojure where, like in most other Lisps, code is treated as data and many parts of your software are introspectable at run time. specs are, once defined, not easy to extract data from.

Contrast this with schema, a Clojure library which does essentially the same thing, and got started before spec. schema “schemas” are just hash maps, which are easily constructed and introspected at run time. schema has other problems, but the community seems to generally flock to spec as the officially blessed version.

The situation I have now repeatedly seen is one where I have a spec defining a specific representation of an entity, where for example one spec extends another one. A practical example here is when augmenting an entity with additional data, so that the output representation is a modified version of the input type.

You can define a spec as a superset of another spec, and only in that direction, using s/or, but at that point you are already stuck in the spec and cannot get the list of fields easily out of your specs. It would be nice to be able to define one spec in terms of flat data, which would also be available at run time. This would be incredibly helpful for use with something like select-keys.

There are ways of working around this, for example by using eval, or wrapping your spec macros in another layer of macros, but both of these are more hacks than solutions. spec alpha 2 promises to solve this particular problem by separating symbolic specs from spec objects, the former being just flat data which can also be generated programmatically.

If spec alpha 2 works out the way I hope, this will be solved soon, and the documentation looks promising so far.

Namespacing

spec is very intent on you using namespaces for your keywords, which in general is a good idea. Many developers, including myself, have been using the namespace of a key to give context about its use, for example :person/name is a different spec from :company/name. The problem with this is the overloading of namespaced keywords, which are an existing Clojure feature. These namespaces person and company do not actually exist, and I do not want to clutter all my keys with a long prefix like :myapp.entities.person/name to make it match a real namespace.

Now namespaced keywords can have any namespace, and the namespace does not actually have to exist for everything related to keywords to work well, except when it does. If you want to use a spec from another namespace, but alias that namespace to shorten it, you need to require that namespace, for which it needs to exist. As a workaround to this I have created “fake” namespaces in the past using a helper.

This actually leads me to another problem, which is the question of where to place specs in the first place. spec comes with a global, centralised registry for specs, which in alpha 1 you cannot opt out of. In theory this allows you to define/register your spec once in any place you like, and then access it from anywhere without having to know where it comes from or requiring it. This, while having the potential for being too opaque and magical, is actually a very good feature in my opinion. It is trivial to override specs when reloading them, and I have not experienced any issues with evaluation order yet. Due to specs referring to other specs by their name, you can define dependencies of a spec after the fact, and the system will pick them up accordingly.

My current solution for this is having a file & namespace for every entity which can be required and aliased normally, the only problem with this being relationships between specs. As soon as one spec needs to include another spec, dependencies get muddled, so I have experimented with having a special namespace for specs which are shared across many other specs, but this is far from ideal. I wish there was a cleaner way to do this, especially leveraging the global registry.

Function spec Definitions

I mentioned above that spec also allows instrumenting functions, but the semantics for this are a bit wonky in my opinion. See for yourself:

(defn double [x]
  (* 2 x))
  
(s/fdef double
  :args (s/cat :x int?)
  :ret int?
  :fn #(= (:ret %)
          (-> % :args :x (* 2))))

This naive spec restricts in- and output types to integers, which is okay in this case. The :fn key describes the relationship between in- and output, and is in this case actually absurdly strict, but this is just an example. There are two issues I have with this:

First the :fn definition tends to be very elaborate and hard to understand at a glance. Even in this simple case, there is a lot going on in there, and the use of anonymous functions does not help it. In practice, this key is optional and I omit it almost always, because I cannot think of any formal assertions I want to make about the output which are also reasonably simple to encode. And if you want to make several separate assertions about the output, you almost cannot avoid breaking up the spec into pieces, at which point you have predicates which exist purely for spec.

The other issue I have is that this is decoupled from the actual function definition. In theory you can place the spec in a different namespace and refer to the function using its fully qualified name, and this is tempting, especially when looking at my previous point about these specs having the potential to be far longer than the actual function definitions. But then your function exists conceptually in two places, and these two places have to be kept in sync. If you move, rename, or modify the function in almost any way, you have to modify the spec, too, but first you have to find the spec.

The problem I can see with this is that :fn can be used to make assertions about the functions, which can in almost all cases be made in unit tests as well. In fact, unit tests are meant for exactly this, asserting single assumptions about units at a time. The condition above could just as well be a test called the result equals twice the input value. Functions are usually only instrumented locally and/or during testing, as they incur non-negligible overhead, and I would argue that they do not provide significant assurances over unit tests.

I would much rather drop the :fn key, and include type declarations in the actual function definition, which is incidentally how schema works:

(s/defn double :- s/Int
  [x :- s/Int]
  (* 2 x))

In the end, I am looking forward to the next iteration of spec, and hope it addresses as many issues as possible, and I am already blown away by how much better it is than alternatives available in other languages.

June 30, 2019

Derek Jones (derek-jones)

Medieval guilds: a tax collection bureaucracy June 30, 2019 11:45 PM

The medieval guild is sometimes held up as the template for an institution dedicated to maintaining high standards, and training the next generation of craftsmen.

“The European Guilds: An economic analysis” by Sheilagh Ogilvie takes a chainsaw (i.e., lots of data) to all the positive things that have been said about medieval guilds (apart from them being a money making machine for those on the inside).

Guilds manipulated markets (e.g., drove down the cost of input items they needed, and kept the prices they charged high), had little or no interest in quality, charged apprentices for what little training they received, restricted entry to their profession (based on the number of guild masters the local population could support in a manner expected by masters), and did not hesitate to use force to enforce the rules of the guild (should a member appear to threaten the livelihood of other guild members).

Guild wars are not just the fiction of an online game; guilds did go to war with each other.

Given their focus on maximizing income, rather than providing customer benefits, why did guilds survive for so many centuries? Guilds paid out significant sums to influence those in power, i.e., bribes. Guilds paid annual sums for the exclusive rights to ply their trade in geographical areas; it’s all down on Vellum.

Guilds provided the bureaucracy needed to collect money from the populace, i.e., they were effectively tax collectors. Medieval rulers had a high turn-over, and most were not around long enough to establish a civil service. In later centuries, the growth of a country’s population led to the creation of government departments that were stable enough to perform tax collecting duties more efficiently than guilds; it was the spread of governments capable of doing their own tax collecting that killed off guilds.

June 29, 2019

jfb (jfb)

Movie Review - A Simple Favor (2019) June 29, 2019 12:00 AM

I didn’t really know anything about this, going in. It was tonally all over the map; the performances were good and it was competently put together. The pure aesthetics of watching Henry Golding and Blake Lively interact were pretty great; but at the end of the day, the plot was stupid, the pacing weird, and the tone veered too radically between farce and thriller to work for me.

Any time you “solve” the big mystery with a twin, you have failed, screenwriter. Sure, that might be the plot failing of the source material, but it’s still dumb as hell.

I give this movie two stars: one for the aforementioned aesthetics, and one for Anna Kendrick’s teeth.

Rating: ★ ★

June 28, 2019

Grzegorz Antoniak (dark_grimoire)

Enabling lldb on Mojave June 28, 2019 06:00 AM

I have an impression that each version of the macOS system introduces additional obstacles for developers. From one point of view I understand that they just want to make the system more secure, and that should be the biggest priority above all (well, maybe not above common sense). But it's hard …

June 27, 2019

Gonçalo Valério (dethos)

8 useful dev dependencies for django projects June 27, 2019 11:15 PM

In this post I’m gonna list some very useful tools I often use when developing a Django project. These packages help me improve the development speed, write better code and also find/debug problems faster.

So let’s start:

Black

This one is to avoid useless discussions about preferences and taste related to code formatting. Now I just install black and let it take care of these matters; it doesn’t have any configuration options (with one or two exceptions) and, if your code does not have any syntax errors, it will be automatically formatted according to a “style” that is reasonable.
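
For example, black will rewrite a sloppy one-liner into the standard layout (the exact output can vary slightly between black versions, so treat this as an approximation):

# before
def add(a,b):return a+b

# after running black (roughly)
def add(a, b):
    return a + b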

Note: Many editors can be configured to automatically run black on every file save.

https://github.com/python/black

PyLint

Using a code linter (a kind of static analysis tool) is also very easy; it can be integrated with your editor and allows you to catch many issues without even running your code, such as missing imports, unused variables, missing parentheses and other programming errors. There are a few other linters available, but pylint does the job well and I never bothered to switch.

https://www.pylint.org/

Pytest

Python has a unit testing framework included in its standard library (unittest) that works great; however, I found an external package that makes me more productive and my tests much clearer.

That package is pytest and once you learn the concepts it is a joy to work with. A nice extra is that it recognizes your older unittest tests and is able to execute them anyway, so no need to refactor the test suite to start using it.

https://docs.pytest.org/en/latest/

Pytest-django

This package, as the name indicates, adds the required support and some useful utilities to test your Django projects using pytest. With it, instead of python manage.py test, you just execute pytest, like in any other Python project.
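
A minimal test might look like this (a sketch only; "article-list" is a made-up URL name, while the client fixture and the django_db marker come from pytest-django):

import pytest
from django.urls import reverse

@pytest.mark.django_db          # gives the test access to the test database
def test_article_list_returns_200(client):
    # 'client' is a Django test client fixture provided by pytest-django
    response = client.get(reverse("article-list"))
    assert response.status_code == 200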

https://pytest-django.readthedocs.io

Django-debug-toolbar

Debug toolbar is a web panel added to your pages that lets you inspect the request content, database queries, template generation, etc. It provides lots of useful information so you can understand how the whole page rendering is behaving.

It can also be extended with other plugins that provide more specific information, such as flamegraphs, HTML validators and other profilers.

https://django-debug-toolbar.readthedocs.io

Django-silk

If you are developing an API without any HTML pages rendered by Django, django-debug-toolbar won’t provide much help. This is where django-silk shines, in my humble opinion: it provides many of the same metrics and information on a separate page that can be inspected to debug problems and find performance bottlenecks.

https://github.com/jazzband/django-silk

Django-extensions

This package is a collection of small scripts that provide commonly needed functionality. It contains a set of management commands, such as shell_plus and runserver_plus (improved versions of the default ones), database visualization tools, debugger tags for the templates, abstract model classes, etc.

https://django-extensions.readthedocs.io

Django-mail-panel

Finally, this one is an email panel for django-debug-toolbar that lets you inspect the sent emails while developing your website/webapp. This way you don’t have to configure another service to catch the emails, or read the messages on the terminal with django.core.mail.backends.console.EmailBackend, which is not very useful if you are working with HTML templates.

https://github.com/scuml/django-mail-panel

Awn Umar (awn)

mutable strings in go June 27, 2019 12:00 AM

According to the Go language specification:

Strings are immutable: once created, it is impossible to change the contents of a string.

We’ll see about that.

data := []byte("yellow submarine")
str := *(*string)(unsafe.Pointer(&data))

for i := range data {
    data[i] = 'x'
    fmt.Printf("%s\n%s\n\n", string(data), str)
}

Try it out yourself.

June 24, 2019

Simon Zelazny (pzel)

Ruby apps under runit: notes to self June 24, 2019 10:00 PM

I was recently migrating nearly ten-year-old Ruby software from an Ubuntu 10.04 VM to something more recent. It took me way longer than I expected, due to two major snags.

Snag one: Bundler and per-user runsvdir

The apps in their original Lucid Lynx setting were run from a bash script that resembled this:

while (/bin/true); do
  (bundle exec 'thin -C config_foo.yml -R config.ru start' 2>&1)
  echo "Server died at `date`. Respawning.." >&2
  sleep 2
done

The app stayed up for years, so I guess this unsophisticated approach was good enough. For ease of deployment (and improved logging, see point #2 below), I decided to move all ruby web apps on this VPS to a user-local runit supervision tree.

In practice, this means that there's a /etc/service/webuser-sv directory, containing a runit service which launches a nested runsvdir program as the webuser-sv user.

$ cat /etc/service/webuser-sv/run
#!/bin/sh
exec 2>&1
exec chpst -uwebuser runsvdir /home/webuser/service

Now, I can define all of webuser's ruby apps as entries under /home/webuser/service/*, and have them supervised without bash hackery.

The snag was that the apps would crash with this error, but only when run as part of the runit supervision tree:

2019-06-25_11:27:37.51241 `/` is not writable.
2019-06-25_11:27:37.51244 Bundler will use `/tmp/bundler/home/unknown' as your home directory temporarily.

But if I ran the runit run scripts by hand, the apps started up and worked correctly.

After a lot of false starts and pointless github issue rabbit-holes, I realized that the runsvdir process managing the user-local supervision tree was launched with chpst by the master runit supervisor. Specifying a UID for chpst with -u does not automatically mean that the profile for this user gets loaded. In particular, not even $HOME was configured in the runit supervisor environment.

Bundler needs '$HOME' to be set; otherwise it gets confused.

Hence, my runit run files now look like this:

exec 2>&1
export RUBYOPT=-W0
export HOME=/home/web
export LANG=en_US.UTF-8
cd /home/web/webapp.1.com
exec bundle exec rackup -E production -s thin -o 0.0.0.0 -p 4567 config.ru

Duplication could be further removed by setting up an envdir, and running the user-level runsvdir with this envdir passed to chpst ... but the above solution is good enough for today.

Snag two: Don't ever redirect $stdout and $stderr

Runit has a wonderfully clean approach to logging, predating the 12-factor app, but very similar in spirit. Run your app in the foreground, and pipe its output to your log processor. This way applications themselves never need to concern themselves with logging frameworks and infrastructure.

Now ten years ago, when I launched these apps from bash scripts, the apps themselves definitely needed to know what to do with their logs. Hence, these two monstrous lines:

settings.configure do |s|
    $stdout = File.open("log/log_out.txt","a")
    $stderr = File.open("log/log_err.txt","a")

These two lines had me stumped as to why an app was silently crashing at startup with nothing in the runit logs. After removing the two lines I was able to see the real reason for the crash (some missing assets).

Sure, the exception had been logged all along to log/log_err.txt, but I'd completely forgotten to look there, expecting the logging to be handled by runit's log facility.

Never redirect $stdout and $stderr inside the app, kids. Your future self will thank you.

Chris Allen (bitemyapp)

Abusing instance local functional dependencies for nicer number literals June 24, 2019 12:00 AM

I demonstrated how to make kilobyte/megabyte constants in Haskell and a Lobsters user asked how it worked. This is a bloggification of my reply to them explaining the trick.

June 23, 2019

Ponylang (SeanTAllen)

Last Week in Pony - June 23, 2019 June 23, 2019 06:06 PM

Last Week In Pony is a weekly blog post to catch you up on the latest news for the Pony programming language. To learn more about Pony check out our website, our Twitter account @ponylang, or our Zulip community.

Got something you think should be featured? There’s a GitHub issue for that! Add a comment to the open “Last Week in Pony” issue.

Chris Double (doublec)

Getting Started with Mercury June 23, 2019 11:00 AM

Mercury is a logic programming language, similar to Prolog, but with static types. It feels like a combination of SML and Prolog at times. It was designed to help with programming large systems - that is, large programs, large teams, better reliability, etc. The commercial product Prince XML is written in Mercury.

I've played around with Mercury in the past but haven't done anything substantial with it. Recently I picked it up again. This post is a short introduction to building Mercury, and some example "Hello World" style programs to test the install.

Build

Mercury is written in the Mercury language itself. This means it needs a Mercury compiler to bootstrap from. The way I got a build going from source was to download the source for a release of the day version, build that, then use that build to build the Mercury source from github. The steps are outlined in the README.bootstrap file, but the following commands are the basic steps:

$ wget http://dl.mercurylang.org/rotd/mercury-srcdist-rotd-2019-06-22.tar.gz
$ tar xvf mercury-srcdist-rotd-2019-06-22.tar.gz
$ cd mercury-srcdist-rotd-2019-06-22
$ ./configure --enable-minimal-install --prefix=/tmp/mercury
$ make
$ make install
$ cd ..
$ export PATH=/tmp/mercury/bin:$PATH

With this minimal compiler the main source can be built. Mercury has a number of backends, called 'grades' in the documentation. Each of these grades makes a number of tradeoffs in terms of generated code. They define the platform (C, assembler, Java, etc), whether GC is used, what type of threading model is available (if any), etc. The Adventures in Mercury blog has an article on some of the different grades. Building all of them can take a long time - multiple hours - so it pays to limit it if you don't need some of the backends.

For my purposes I didn't need the CSharp backend, but wanted to explore the others. I was ok with the time tradeoff of building the system. To build from the master branch of the github repository I did the following steps:

$ git clone https://github.com/Mercury-Language/mercury
$ cd mercury
$ ./prepare.sh
$ ./configure --enable-nogc-grades --disable-csharp-grade \
              --prefix=/home/myuser/mercury
$ make PARALLEL=-j4
$ make install PARALLEL=-j4
$ export PATH=/home/myuser/mercury/bin:$PATH

Change the prefix to where you want Mercury installed. Add the relevant directories to the PATH as specified by the end of the build process.

Hello World

A basic "Hello World" program in Mercury looks like the following:

:- module hello.

:- interface.
:- import_module io.
:- pred main(io, io).
:- mode main(di, uo) is det.

:- implementation.
main(IO0, IO1) :-
    io.write_string("Hello World!\n", IO0, IO1).

With this code in a hello.m file, it can be built and run with:

$ mmc --make hello
Making Mercury/int3s/hello.int3
Making Mercury/ints/hello.int
Making Mercury/cs/hello.c
Making Mercury/os/hello.o
Making hello    
$ ./hello
Hello World!

The first line defines the name of the module:

:- module hello.

Following that is the definitions of the public interface of the module:

:- interface.
:- import_module io.
:- pred main(io, io).
:- mode main(di, uo) is det.

We publicly import the io module, as we use io definitions in the main predicate. This is followed by a declaration of the interface of main - like in C, this is the user function called by the runtime to execute the program. The definition here declares that main is a predicate that takes two arguments, of type io. This is a special type that represents the "state of the world" and is how I/O is handled in Mercury. The first argument is the "input world state" and the second argument is the "output world state". All I/O functions take these two arguments - the state of the world before the function and the state of the world after.

The mode line declares aspects of a predicate related to the logic programming side of things. In this case we declare that the two arguments passed to main have the "destructive input" mode and the "unique output" mode respectively. These modes operate similarly to how linear types work in other languages, and the reference manual has a section describing them. For now the details can be ignored. The is det portion identifies the predicate as being deterministic. It always succeeds, doesn't backtrack and only has one result.

The remaining code is the implementation. In this case it's just the implementation of the main function:

main(IO0, IO1) :-
    io.write_string("Hello World!\n", IO0, IO1).

The two arguments to main, are the io types representing the before and after representation of the world. We call write_string to display a string, passing it the input world state, IO0 and receiving the new world state in IO1. If we wanted to call an additional output function we'd need to thread these variables, passing the obtained output state as the input to the new function, and receiving a new output state. For example:

main(IO0, IO1) :-
    io.write_string("Hello World!\n", IO0, IO1),
    io.write_string("Hello Again!\n", IO1, IO2).

This state threading can be tedious, especially when refactoring - the need to renumber or rename variables is a pain point. Mercury has syntactic sugar for this called state variables, enabling this function to be written like this:

main(!IO) :-
    io.write_string("Hello World!\n", !IO),
    io.write_string("Hello Again!\n", !IO).

When the compiler sees !Variable_name in an argument list it creates two arguments with automatically generated names as needed.

Another syntactic short cut can be done in the pred and mode lines. They can be combined into one line that looks like:

:- pred main(io::di, io::uo) is det.

Here the modes di and uo are appended to the type prefixed with a ::. The resulting program looks like:

:- module hello.

:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.

:- implementation.
main(!IO) :-
    io.write_string("Hello World!\n", !IO),
    io.write_string("Hello Again!\n", !IO).

Factorial

The following is an implementation of factorial:

:- module fact.

:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.

:- implementation.
:- import_module int.
:- pred fact(int::in, int::out) is det.

fact(N, X) :-
  ( N = 1 -> X = 1 ; fact(N - 1, X0), X = N * X0 ).

main(!IO) :-
  fact(5, X),
  io.print("fact(5, ", !IO),
  io.print(X, !IO),
  io.print(")\n", !IO).

In the implementation section here we import the int module to access functions across machine integers. The fact predicate is declared to take two arguments, both of type int, the first an input argument and the second an output argument.

The definition of fact uses Prolog syntax for an if/then statement. It states that if N is 1 then (the -> token) the output variable X is 1. Otherwise (the ; token), it calculates the factorial recursively, using an intermediate variable X0 to hold the temporary result.

There's a few other ways this could be written. Instead of the Prolog style if/then, we can use an if/then syntax that Mercury has:

fact(N, X) :-
  ( if N = 1 then X = 1 else fact(N - 1, X0), X = N * X0 ).

Instead of using predicates we can declare fact to be a function. A function has no output variables; instead it returns a result, just like functions in standard functional programming languages. The changes for this are to declare it as a function:

:- func fact(int) = int.
fact(N) = X :-
  ( if N = 1 then X = 1 else X = N * fact(N - 1) ).

main(!IO) :-
  io.print("fact(5, ", !IO),
  io.print(fact(5), !IO),
  io.print(")\n", !IO).

Notice now that the call to fact looks like a standard function call and is inlined into the print call in main. A final syntactic shortening of function implementations enables removing the X return variable name and returning directly:

fact(N) = (if N = 1 then 1 else N * fact(N - 1)).

Because this implementation uses machine integers it won't work for values that can overflow. Mercury comes with an arbitrary precision integer module, integer, that allows larger factorials. Replacing the use of the int module with integer and converting the static integer numbers is all that is needed:

:- module fact.

:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.

:- implementation.
:- import_module integer.
:- func fact(integer) = integer.

fact(N) = (if N = one then one else N * fact(N - one)).

main(!IO) :-
  io.print("fact(1000, ", !IO),
  io.print(fact(integer(1000)), !IO),
  io.print(")\n", !IO).

Conclusion

There's a lot more to Mercury. These are just first steps to test the system works. I'll write more about it in later posts. Some further reading:

Bit Cannon (wezm)

Announcing Desktop Institute June 23, 2019 05:42 AM

Since publishing A Tiling Desktop Environment, I’ve continued to think about the topic, absorb the comments I received, try out some of the suggestions, and poke around the code bases of some existing window managers and Wayland compositors.

This weekend I set up a new website to document the thinking and research I’ve been doing. It’s called Desktop Institute, and it has a fun domain: desktop.institute. Check it out for more info on what I have planned, as well as a roadmap for future posts.

June 21, 2019

Indrek Lasn (indreklasn)

How To Fetch Data From An API With React Hooks June 21, 2019 01:18 PM

React hooks let us write pure functional components without ever using the class syntax. Usually, but not always, the less code we have to…

June 20, 2019

Indrek Lasn (indreklasn)

How did you get this title formatting? June 20, 2019 06:59 PM

How To Use Redux with React Hooks June 20, 2019 10:44 AM

React Redux released hooks with the 7.1.0 version. This means we get to use the latest best practices with React.

Marc Brooker (mjb)

When Redundancy Actually Helps June 20, 2019 12:00 AM

Redundancy can harm more than it helps.

Just after I joined the EBS team at AWS in 2011, the service suffered a major disruption lasting more than two days to full recovery. Recently, on Twitter, Andrew Certain said:

We were super dependent on having a highly available network to make the replication work, so having two NICs and a second network fabric seemed to be a way to improve availability. But the lesson of this event is that only some forms of redundancy improve availability.

I've been thinking about the second part of that a lot recently, as my team starts building a new replicated system. When does redundancy actually help availability? I've been breaking that down into four rules:

  1. The complexity added by introducing redundancy mustn't cost more availability than it adds.
  2. The system must be able to run in degraded mode.
  3. The system must reliably detect which of the redundant components are healthy and which are unhealthy.
  4. The system must be able to return to fully redundant mode.

This might seem obvious, even tautological, but each rule serves as the trigger for deeper thinking and conversation.

Don't add more risk than you take away

Andrew (or Kerry Lee, I'm not sure which) introduced this to the EBS team as don't be weird.

This isn't a comment on people (who are more than welcome to be weird), but on systems. Weirdness and complexity add risk, both risk that we don't understand the system that we're building, and risk that we don't understand the system that we are operating. When adding redundancy to a system, it's easy to fall into the mistake of adding too much complexity, and underestimating the ways in which that complexity adds risk.

You must be able to run in degraded mode

Once you've failed over to the redundant component, are you sure it's going to be able to take the load? Even in one of the simplest cases, active-passive database failover, this is a complex question. You're going from warm caches and full buffers to cold caches and empty buffers. Performance can differ significantly.

As systems get larger and more complex, the problem gets more difficult. What components do you expect to fail? How many at a time? How much traffic can each component handle? How do we stop our cost reduction and efficiency efforts from taking away the capacity needed to handle failures? How do we continuously test that the failover works? What mechanism do we have to make sure there's enough failover capacity? There's typically at least as much investment in answering these questions as building the redundant system in the first place.

Chaos testing, gamedays, and other similar approaches are very useful here, but typically can't test the biggest failure cases in a continuous way.

You've got to fail over in the right direction

When systems suffer partial failure, it's often hard to tell what's healthy and what's unhealthy. In fact, different systems in different parts of the network often completely disagree on health. If your system sees partial failure and fails over towards the truly unhealthy side, you're in trouble. The complexity here comes from the distributed systems fog of war: telling the difference between bad networks, bad software, bad disks, and bad NICs can be surprisingly hard. Often, systems flap a bit before falling over.

The system must be able to return to fully redundant mode

If your redundancy is a single shot, it's not going to add much availability in the long term. So you need to make sure the system can safely get from one to two, or N to N+1, or N to 2N. This is relatively easy in some kinds of systems, but anything with a non-zero RPO or asynchronous replication or periodic backups can make it extremely difficult. In small systems, human judgement can help. In larger systems, you need an automated plan. Most likely, you're going to make a better automated plan during daylight in the middle of the week during your design phase than at 3AM on a Saturday while trying to fix the outage.

June 19, 2019

Derek Jones (derek-jones)

A zero-knowledge proofs workshop June 19, 2019 11:33 PM

I was at the Zero-Knowledge proofs workshop run by BinaryDistict on Monday and Tuesday. The workshop runs all week, but is mostly hacking for the remaining days (hacking would be interesting if I had a problem to code; more about this at the end).

Zero-knowledge proofs allow person A to convince person B that A knows the value of x, without revealing the value of x. There are two kinds of zero-knowledge proofs: an interactive proof system involves a sequence of messages being exchanged between the two parties, while in non-interactive systems (the primary focus of the workshop), there is no interaction.

The example usually given of a zero-knowledge proof involves Peggy and Victor. Peggy wants to convince Victor that she knows how to unlock the door dividing a looping path through a tunnel in a cave.

The ‘proof’ involves Peggy walking, unseen by Victor, down path A or B (see diagram below; image from Wikipedia). Once Peggy is out of view, Victor randomly shouts out A or B; Peggy then has to walk out of the tunnel along the path Victor shouted. If Peggy cannot open the door, she can only comply when she happened to enter the path Victor names, which is a 50% chance. The proof is iterative: each iteration halves the chance that Peggy has simply been lucky, and Victor iterates until he is sufficiently satisfied that Peggy knows how to open the door.

Diagram: the Ali Baba example cave loop (image from Wikipedia).
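A quick way to see how the confidence builds up is a toy simulation (a sketch of the cave game only, not real cryptography): a Peggy who cannot open the door survives a round only if she happened to pick the path Victor calls, so her chance of surviving n rounds is 0.5^n.

import random

# Toy simulation of the cave game: a cheating Peggy (who cannot open the
# door) only passes a round if she happened to enter the path Victor calls.
def cheating_peggy_survives(rounds):
    for _ in range(rounds):
        peggy_path = random.choice("AB")   # chosen before Victor shouts
        victor_call = random.choice("AB")
        if peggy_path != victor_call:
            return False                   # stuck behind the locked door
    return True

trials = 100_000
for rounds in (1, 5, 10, 20):
    survived = sum(cheating_peggy_survives(rounds) for _ in range(trials))
    print(f"{rounds:2d} rounds: cheat survives ~{survived / trials:.6f}"
          f" (theory {0.5 ** rounds:.6f})")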

As the name suggests, non-interactive proofs do not involve any message passing; in the common reference string model, a string of symbols, generated by the person making the claim of knowledge, is encoded in such a way that it can be used by third parties to verify the claim. At the workshop we got an overview of zk-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge).

The ‘succinct’ component of zk-SNARK is what has made this approach practical. When non-interactive proofs were first proposed, the arguments of knowledge contained around one terabyte of data; these days common reference strings are around a kilobyte.

The fact that zero-knowledge ‘proofs’ are possible is very interesting, but do they have practical uses?

The hackathon aspect of the workshop was designed to address the practical use issue. Existing zero-knowledge proofs tend to involve the use of prime numbers, or the factors of very large numbers (as might be expected of a proof system that is heavily based on cryptographic techniques). Making use of zero-knowledge proofs requires mapping the problem to a form that has a known solution; this is very hard. Existing applications involve cryptography and blockchains (Zcash is a cryptocurrency with an option that provides privacy via zero-knowledge proofs), both heavy users of number theory.

The workshop introduced us to two languages that could be used for writing zero-knowledge applications: ZoKrates and snarky. The weekend before the workshop, I tried to install both languages: ZoKrates installed quickly and painlessly, while I could not get snarky installed (I was told that the first two hours of the snarky workshop were spent getting installs to work). I also noticed that ZoKrates had a greater presence on the web than snarky, in the form of pages discussing the language; it seemed to me that ZoKrates was the market leader. The workshop presenters included people involved with both languages; Jacob Eberhardt (one of the people behind ZoKrates) gave a great presentation, and had good slides. Team ZoKrates is clearly the one to watch.

As an experienced hackathon attendee, I was ready with an interesting problem to solve. After I explained the problem to those opting to use ZoKrates, somebody suggested that oblivious transfer could be used to solve it (and indeed, 1-out-of-n oblivious transfer does offer the required functionality).

My problem was: let’s say I have three software products, the customer has a copy of all three, and is willing to pay the license fee to use one of them. However, the customer does not want me to know which of the three products they are using. How can I send them a product-specific license key without knowing which product they are going to use? Oblivious transfer involves a sequence of message exchanges (each exchange involves three messages, one per product), with the final exchange requiring that I send three messages, each containing a separate product key; the customer can only successfully decode the message for the product they selected earlier in the process (decoding the other two messages produces random characters, i.e., no product key).
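For the curious, here is a minimal sketch of that kind of message flow, loosely following the classic RSA-based oblivious transfer construction (the tiny textbook RSA key and the integer ‘license keys’ are my own illustrative assumptions, and the code is in no way cryptographically secure):

import random

# Toy 1-out-of-3 oblivious transfer, loosely following the RSA-based
# Even-Goldreich-Lempel construction. Textbook-sized RSA numbers and
# integer "license keys" are illustrative assumptions only -- NOT secure.
n, e, d = 3233, 17, 2753           # toy RSA key pair (p=61, q=53)
keys = [1111, 2222, 3031]          # one "license key" per product, each < n

# Seller: publish one random value per product.
x = [random.randrange(n) for _ in keys]

# Customer: wants product `choice`, blinds it with a random k.
choice = 1
k = random.randrange(1, n)
v = (x[choice] + pow(k, e, n)) % n

# Seller: cannot tell which x[i] was used, so answers for all three.
blinded = [(keys[i] + pow((v - x[i]) % n, d, n)) % n for i in range(3)]

# Customer: only the chosen slot unblinds to a real key.
recovered = (blinded[choice] - k) % n
assert recovered == keys[choice]
print("recovered key:", recovered)

From the customer’s point of view the other two blinded values unblind to effectively random numbers, which matches the behaviour described above.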

As at most hackathons, the problem ideas were somewhat contrived (a few people wanted to delve further into the technical details). I could not find an interesting team to join, and left them to it for the rest of the week.

There were 50-60 people on the first day, and 30-40 on the second. Many of the people I spoke to were recent graduates, and half of the speakers were doing or had just completed PhDs; the field is completely new. If zero-knowledge proofs take off, decisions made over the next year or two by the people at this workshop will shape the path the field follows. Otherwise, nothing happens, and a bunch of people will have interesting memories about stuff they dabbled in when young.

Indrek Lasn (indreklasn)

I reckon those are for testing purposes only. June 19, 2019 05:43 PM

I reckon those are for testing purposes only. Check out the Move programming section to programmatically handle transactions:

https://developers.libra.org/docs/move-overview#move-transaction-scripts-enable-programmable-transactions

I’m Giving Out My Best Business Ideas Hoping Someone Will Build Them — Part II June 19, 2019 02:36 PM

This is a series where I post ideas and problems I want to see solved. The ideas range from tech to anything. The main idea behind the…

June 18, 2019

Indrek Lasn (indreklasn)

Thanks, Madhu. I love exploring and writing about cool new tech. Keep on rockin’ June 18, 2019 08:55 PM

Thanks, Madhu. I love exploring and writing about cool new tech. Keep on rockin’