
Professional Hardwarenear Programming

Networking

© 2011 Peter Thoemmes, 2011-10-13

IPv6

IP version 6 (IPv6) is a new version of the Internet Protocol, designed as the successor to IP version 4 (IPv4). IPv6 primarily introduces the following changes (RFC 2460):

Expanded Addressing Capabilities

IPv6 increases the IP address size from 32 bits to 128 bits, to support more levels of addressing hierarchy, a much greater number of addressable nodes, and simpler auto-configuration of addresses. In addition, a new type of address called an 'anycast address' is defined, used to send a packet to any one of a group of nodes.

Header Format Simplification

Some IPv4 header fields have been dropped or made optional, to reduce the common-case processing cost of packet handling and to limit the bandwidth cost of the IPv6 header.

Improved Support for Extensions and Options

Changes in the way IP header options are encoded allow for more efficient forwarding, less stringent limits on the length of options, and greater flexibility for introducing new options in the future.

Flow Labeling Capability

A new capability is added to enable the labeling of packets belonging to particular traffic "flows" for which the sender requests special handling, such as non-default quality of service or 'real-time' service.

Authentication and Privacy Capabilities

Extensions to support authentication, data integrity, and (optional) data confidentiality are specified for IPv6.

Please find a research paper by Peter Thoemmes serving as a quick guide through the IPv6 world here:

A research paper by Peter Thoemmes about IPv6

Encryption

General

In the 1990s, Ethernet arrived on the scene for industrial devices in automation and control. Minicomputer manufacturers came under increasing pressure to support not just serial link interfaces on their hardware (TTY-24mA, RS232, RS422 and RS485 bus), but also fieldbus systems (PROFIBUS, Interbus, Modbus, etc.). While the fieldbus systems could still be handled quite well (special ASICs were developed to easily interface them with existing circuits), computer networks like IBM's Token Ring and Ethernet arrived a bit too fast and too dynamically. Microsoft forced its way into networking with Windows 3.11 at the beginning of the 1990s, and so the door was open for endless things to be developed, not just at universities but in private homes. Linux started on its way to becoming a serious threat to the Microsoft platform on i386 systems. At almost the same time the World Wide Web started, and the push it gave to IP (IPv4) networking was so big that hardware developers had real problems coping with the rapid development. As time was short, more and more hardware simply came with embedded systems running Microsoft Windows. Besides that, solutions to connect the good old serial link to Ethernet were set up using print servers and port servers, acting as gateways to existing hardware with serial line interfaces. And last, but not least, network protocols to transport multimedia (audio, video and data) came up, such as ATM.

While TCP on top of IPv4 finally became more and more popular, other things were popping up, like Microsoft's NetBIOS, NetDDE, DCOM (ORPC on top of DCE-RPC on top of TCP/IP) and SMB on top of NetBIOS. The big mistake of that time was that the main focus of the development labs was on solutions for connectivity, and only rarely on security. Since the year 2000, when the famous Y2K problem was faced, developers started to think seriously about security, and during the following decade security rose to become one of the most important things in computing. Going for SSH, SFTP and SCP rather than RSH, FTP and RCP is nowadays a normal thing, while in the beginning it was a real challenge. Today, banks typically offer online banking via SSL to everyone. Encrypted connections like SSH tunnels, SSL and VPN connections now form the standard way to get interconnected worldwide.

Key Exchange Before Encryption

While on the one hand developers and researchers worked heavily on secure ways to encrypt messages, there was still the problem of how the encryption keys could be exchanged without people physically moving to the other end of the connection (meaning to the server). The basic solution to this problem was already discovered in 1976: the so-called Diffie-Hellman key agreement method. Using that method, the two endpoints exchange their public keys over an unencrypted line. Then both build the secret key for that connection: user A builds that key from his private key and user B's public key, and user B builds the same key from his private key and user A's public key. So the actual secret key is never transmitted, and it can only be built by users A and B. That makes the secret key unique to the connection between user A and user B. The protocols SSH, SFTP and SCP use this method to agree on a secret key. How this is possible is explained in the following paper:

A research paper by Peter Thoemmes about the Diffie-Hellman Key Agreement Method
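
Just to make the mechanics tangible, here a minimal sketch using the OpenSSL command line (all file names are placeholders): both sides generate a key pair from the same shared DH parameters, exchange only the public parts, and each derives the identical secret from its own private key and the peer's public key.

$ openssl dhparam -out dhparam.pem 2048
$ openssl genpkey -paramfile dhparam.pem -out priv_a.pem
$ openssl pkey -in priv_a.pem -pubout -out pub_a.pem
$ openssl genpkey -paramfile dhparam.pem -out priv_b.pem
$ openssl pkey -in priv_b.pem -pubout -out pub_b.pem
$ openssl pkeyutl -derive -inkey priv_a.pem -peerkey pub_b.pem -out secret_a.bin
$ openssl pkeyutl -derive -inkey priv_b.pem -peerkey pub_a.pem -out secret_b.bin
$ cmp secret_a.bin secret_b.bin && echo "identical secret on both sides"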

The Diffie-Hellman key agreement method still faces the 'man-in-the-middle' attack problem. To overcome this, the concept of Public Key Infrastructures (PKI) using SSL public key certificates was born. Using that method, a server provides its public key, as it did for the Diffie-Hellman method, but it does so in a signed certificate. A client (e.g. a web browser) then verifies the signature of that certificate using the signature decryption key of the signer (issuer), who is called a Certificate Authority (CA) in a PKI. The signature decryption key is another public key and must not be confused with the signed public key inside the certificate. The issuer (CA) encrypted the signature with the private key of an asymmetric key pair, and the client can decrypt the signature with the public key of that same pair. That is the way an asymmetric key pair works. So a client simply needs to know the public key of the issuer (CA) to be able to decrypt the signature. To enable all clients to do so, the public (signature decryption) key of the root instance of a PKI is deployed during the installation of the encryption software (e.g. OpenSSL or the Firefox web browser). For the Internet community that root instance is the IPRA (Internet Policy Registration Authority). A client can now verify the server's public key by a certificate signed by a CA. It can further verify the CA's public (signature decryption) key by the CA's certificate, and so on. That loop goes up until a certificate is self-signed, meaning the issuer (signer) is at the same time the holder of the public key inside. Then the root CA is reached, e.g. the IPRA. That is the way SSL and HTTPS clients verify public keys provided by a server. How this is done in detail is explained in the following paper:

A research paper by Peter Thoemmes about SSL Public Key Certificates
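
To make the chain verification tangible, here a small sketch with the OpenSSL command line (the file names root_ca.pem, intermediate_ca.pem and server.pem are placeholders for a self-signed root CA certificate, an intermediate CA certificate and the server's certificate): the first command shows holder (subject) and issuer, the second walks the chain up to the trusted root.

$ openssl x509 -noout -subject -issuer -in server.pem
$ openssl verify -CAfile root_ca.pem -untrusted intermediate_ca.pem server.pem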

Like the Diffie-Hellman method, this method also has a weakness. Although it overcomes the 'man-in-the-middle' attack problem, it faces a new problem: the whole story is built on trust! To show what this means, please read the following example. A hacker could fake a certificate, meaning he creates a certificate that maps the FQDN (e.g. www.paypal.com) of a well-known, high-value HTTPS web site to the public key of his own web site. If this guy manages to get access to the private (signature encryption) key of a trusted CA, he can sign that certificate and so become ready to play the 'man-in-the-middle' game. That is because he is then able to hook into an SSL connection request and provide the faked certificate. The requesting client will then trust the certificate, and the hacker can attack him. So the safest method to trust a public key of a server (in the opinion of the author of this article) is to contact the server's administrator by phone and ask him for the fingerprint of the server's public key. To get the MD5 fingerprint from a provided certificate, assuming it is stored in the file server.pem, the following OpenSSL command has to be executed:

$ openssl x509 -md5 -noout -fingerprint -in server.pem
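
If the certificate is not yet at hand, it can be fetched from the running server first; here a sketch, using the hypothetical host name www.example.com:

$ openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.pem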

Just to be complete, here a remark on another problem when using SSL. The world of virtual hosts becomes more complicated with SSL public key certificates. The problem is that the SSL protocol originally did not foresee handling the name of the contacted server. That means, after a DNS server responded to the client with a valid IP address for the server's name, the client connected to that IP address, but the name of the server (fully qualified domain name, FQDN) was never forwarded to the actual server. That is understandable, as the FQDN was originally made to look up the IP address, and that is it. So first there was the DNS lookup, second the IP addressing, third the SSL handshake and encryption setup, and last the actual HTTP protocol was started.

The funny thing with HTTP is that this protocol again handles the server's name. The HTTP protocol headers contain that name (FQDN), so a web server can look at it and dispatch a request to different virtual hosts. It is just a matter of configuration of the web server. That all works fine as long as all virtual hosts communicate over an insecure (plain text) channel, typically using port 80. But as soon as the virtual hosts want to use SSL, they need to provide valid SSL public key certificates. Each virtual host has a unique FQDN, and so each needs its own SSL public key certificate to link that FQDN to its public key. But at the time that certificate needs to be provided (right after the client established the connection to the IP address of the server), the targeted server name is not known, and so the server machine cannot provide the correct certificate if there is more than one. To solve that problem, later versions of SSL were extended by the so-called Server Name Indication (SNI): the newer SSL protocols foresee that a client can send the server's name right before the actual SSL handshake starts. As long as this is not supported by all web browsers, it makes sense to work around that problem, as shown in this document:

A research paper by Peter Thoemmes about SSL and Name-Based Virtual Hosts
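
Which certificate a server presents for a given name can be checked with OpenSSL's s_client, which sends that name in the SNI extension; here a sketch, again using the hypothetical host name www.example.com:

$ openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject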

Persistent Key Exchange for Automation

For any kind of automation it might be useful to persistently store one's own public key in the remote user's authorized-keys database. OpenSSH defines the user's 'authorized_keys' file as such a place to store public keys. So a user who wants to connect without providing a password needs to put his own public key there. To be able to put that key into a remote user's 'authorized_keys' file, one needs the remote user's password. For normal remote accounts the key transfer is possible with the following bash command line:

$ cat ~/.ssh/*.pub | ssh user@host "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; cat >> ~/.ssh/authorized_keys"
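
On machines where OpenSSH's ssh-copy-id helper is installed, the same transfer can be done with a single command:

$ ssh-copy-id user@host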

For remote accounts in a 'chroot' environment, like scponly accounts, the root password is required to do so. This is because the remote 'chroot' user is not allowed to change his/her own environment. Here is a script by Peter Thoemmes that helps you send your local key to the right place in the remote 'scponly' user's 'chroot' environment:

Script by Peter Thoemmes to put a local public SSH (RSA) key into a remote scponly account
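
The core of such a transfer can be sketched as follows, assuming (purely hypothetically) that the scponly user's 'chroot' home is /chroot/home/user on the remote machine; the real path depends on the local setup, which is exactly what the script above takes care of:

$ cat ~/.ssh/id_rsa.pub | ssh root@host "mkdir -p /chroot/home/user/.ssh && cat >> /chroot/home/user/.ssh/authorized_keys"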

Having Public Key Certificates using a Merkle Tree

A completely different way to validate public key certificates is to use one single, public Merkle tree. How this might work is described in this paper:

A research paper by Peter Thoemmes about the Idea to have Public Key Certificates using a Merkle Tree
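
Just to illustrate the basic building block of a Merkle tree, here a toy computation with two leaves using the OpenSSL command line (cert1.pem and cert2.pem are placeholders): every inner node is the hash of the concatenation of its two child hashes, repeated upwards until a single root hash remains.

$ H1=$(openssl dgst -sha256 -r cert1.pem | cut -d' ' -f1)
$ H2=$(openssl dgst -sha256 -r cert2.pem | cut -d' ' -f1)
$ printf '%s%s' "$H1" "$H2" | openssl dgst -sha256 -r | cut -d' ' -f1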