My little mail server

What a pile of poo! I have access to four SMTP servers and they all have limits on the distribution list size (they’re all about the same, around 100 recipients). Google will allow 2,000 per list but only if you use one of their clients. Let’s try to bring up an SMTP server, connect it to DNS, and then send the mail myself. Sounds like a job for AWS.

Ruggedising the Internet

Mike Masnick has written a short article forecasting that engineers will re-engineer the single points of failure out of the internet. He entitles his article Building A More Decentralized Internet: It’s Happening Faster Than People Realize. He cross-references two articles he wrote back in 2010, Operation Payback And Wikileaks Show The Battle Lines Are About Distributed & Open vs. Centralized & Closed and The Revolution Will Be Distributed: Wikileaks, Anonymous And How Little The Old Guard Realizes What’s Going On, in which he, more accurately, recognises the current and future power of distributed and private networks. It should be remembered that these predictions all occurred before the Arab Spring, the recent protests in Turkey, and the state responses to the use of networks.

Masnick predicts that the judicial and informal, extra-judicial attacks on certain internet sites will lead to an engineering response in which the single points of failure are remediated. He points at an article in the New Yorker, The Mission to Decentralize the Internet, which discusses the barriers to mass adoption of superior distributed solutions and some of the ideological history.

One of the responses to today’s challenges is this manifesto for an Internet for the 21st Century, hosted at wauland.de with the hashtag #ybti, an interesting identification of the inadequacies of even the best of today’s technology. I also need to check out the keynote proceedings of 30C3, the Chaos Computer Club’s congress; I am not sure if any of these act as an alternative manifesto. The manifesto calls for,

Our concept for a new Internet is based on the following design principles:

• ubiquitous end-to-end encryption, removing the necessity to trust any third parties that might access our data while it is being transmitted or stored
• obfuscation of transmission patterns, preventing the analysis of social relations, behavior patterns and topical interests of the participants in a network
• decentralized authentication mechanisms, removing the necessity to trust centralized certification authorities that can be compromised
• multicast technology, because we need to interconnect billions of users without the need for centralized server farms
• distributed data flow and storage, making bulk collection of data economically unattractive
• consistent use of free and open software, putting the system under permanent public scrutiny and giving users control over their computation

The comments on Mike’s article are mercifully short of the usual bile about piracy, and at least one contributor points at DNS as one of the choke points. A contributor called ninja says,

One of the next steps on the Internet that must take priority is the development of a decentralized DNS system that can be trusted. And encrypted. There are many developments in the DNS field such as the recent DNSSEC and that OpenDNS initiative to encrypt DNS queries (I’m using it but I honestly don’t know how to check if it works!). Then bittorrent will evolve into a huge cloud hdd making it virtually impossible to take down files from that big cloud. I’m guessing tor may evolve into something that will be used everyday too to ensure privacy and anonymity.

and so adds a storage medium to the list of SPOFs.

One of the replies to the comment about DNS points at Zooko’s triangle. I documented my research on P2P DNS in this article on this wiki which, like the New Yorker article, points at the Bitcoin-derived name service, Namecoin.

Interesting initiatives obviously include Tor, and the EFF pointed me at Tahoe-LAFS, which has its home here…. The Pirate Browser and Diaspora, together with Tor, suggest that peer-to-peer is the way to go, but the stranglehold that the ISPs have on connectivity in the US and Europe will remain a choke point. Another initiative I discovered while writing this article is Project Meshnet.

We, or maybe our municipalities, will need to build peer-to-peer connectivity, which may work well and easily in the towns but will be harder to build in rural areas. DIY is hard since the use of the radio spectrum is highly regulated, but I know that the anti-HADOPI campaigners and some US municipalities have considered building mesh networks from wifi or WiMAX appliances; in the UK this is currently frowned on by the ISPs and inhibited by the Digital Economy Act, although that is still struggling to come into force. (I need to remember the story about a hub maker switching away from Linux because the radio spectrum regulator didn’t want the radio ASIC device driver source published, since it would have allowed illegal, unlicensed use of the spectrum.)

While tidying up the office, I came across a hard copy of this, “Decentralized Infrastructure for Wikileaks”, which has some good ideas.

My personal experiences recently are, firstly, moving into a flat in London, where I was legally able to piggy-back off my neighbours’ connections using BT WiFi, and, by contrast, the difficulties friends living in more rural areas have found in getting connected. At the moment only massive multi-national corporations can afford the cable or satellite networks that allow the internet’s connectivity, but it’s possible the cost of entry is coming down, shown the way by Facebook’s purchase of Ascenta to begin to execute on the vision expressed in this white paper by Mark Zuckerberg.

When will they give up on the Digital Economy Act? (It’s coming up to its 4th anniversary and they still have no timetable for its implementation.)

ooOOOoo

Bruce Schneier points to WhatsApp’s adoption of end-to-end encryption for all content. The comments are as ever worth reading and don’t degenerate into foolish argument. I like “Encryption is a honeypot”, the observation that encrypted broadcasting kills the usefulness of metadata, and the idea of running WhatsApp over Tor. DFL 9 Apr 2016

I have installed the Related Articles plugin and between me and it, the following links might be useful.

P2P DNS

Looking at DNS and the attempt to P2P it.

Peter Sunde launched a project, reported at Computerworld in an article called “P2P DNS to take on ICANN after US domain seizures”.

It seems to have got stuck. This article on Slashdot, dated 18 Oct 2011 and called Continuing the Distributed DNS System, has some pointers. See also P2P-DNS taking control of the Internet at memeburn.com.

The nearest successor seems to be Namecoin; see http://namecoin.info/, http://dot-bit.org/Main_Page and its Wikipedia page.

While researching this I came across a page on alternate roots at Wikipedia.

ddclient

ddclient is a program for Linux that negotiates with http://dyndns.com to allow systems with dynamic IP addresses to keep stable DNS names.

Originally written in 2011 and revisited in 2013.

2013

I made a new Amazon machine. Don’t muck around: install ddclient using apt-get. The post-install script now takes you through the critical configuration questions and writes a .conf file.

If you make the AMI first, you can take the AWS public IP address from its public DNS name. The curl command in the 2011 notes below also still works; a sketch of the resulting configuration file follows the list.

  1. Make a DNS name at dyndns.org, using the AMI’s public IP address
  2. Install ddclient using apt-get
  3. Choose www.dyndns.com
  4. Provide your login/password
  5. Select checkip.dyndns.com
  6. Use select from list
  7. Select your domain name
  8. Wait for the name to propagate <- most important
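For the record, the .conf file the installer writes ends up looking something like the lines below. The hostname and credentials here are placeholders rather than my real ones, so treat this as a sketch of the shape of the file, not a copy of mine.

# /etc/ddclient.conf (sketch, placeholders only)
protocol=dyndns2
use=web, web=checkip.dyndns.com
server=members.dyndns.org
login=your-dyndns-login
password='your-dyndns-password'
your-host.dyndns.org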

2011

I decided to use ddclient to keep a constant name on my Amazon EC2 machine. This was an Ubuntu image, so I had the choice of using the package manager or a tarball install, although the latter is not automated.

Obtaining the Package from dyndns

I originally downloaded it from dyndns.com and installed it on my, now defunct, Cobalt Qube.

This section was written in 2011 when I replaced my Cobalt Qube with an AMI.

I used wget to pull down http://cdn.dyndns.com/ddclient.tar.gz. This was version 3.7.3. It has a README which documents how to install the client. I used update-rc.d to install the start/stop script, which I called rc.ddclient. I used the Debian sample as my model. This required the addition of the LSB init headers (a sketch follows the notes below). I added the login credentials to the appropriate configuration file.

  • The guys at ddclient say I didn’t look hard enough for an LSB compliant script.
  • The default configuration file is held at /etc/ddclient/ddclient.conf.
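From memory, the LSB header block I added to the top of rc.ddclient looked roughly like this; the dependency lists here are my guess at something sensible rather than a copy of what I actually wrote.

### BEGIN INIT INFO
# Provides:          ddclient
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Dynamic DNS client
### END INIT INFO

With the script in /etc/init.d, it gets registered with something like sudo update-rc.d rc.ddclient defaults.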

ddclient needs a directory, /var/cache/ddclient; the initialisation scripts don’t create it, so this needs to be done by hand.
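Something like the following, run once as root, does the job:

sudo mkdir -p /var/cache/ddclient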

I have an Ubuntu image so I needed to install the Perl SSL libraries.

sudo apt-get install libio-socket-ssl-perl

Other distros may have this already, and installing via the package manager pulls it in for you.

N.B. ddclient tries to mail its messages. My server doesn’t have sendmail configured. This seems to be configurable using parameters in the config file. <say more here>
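If I remember the sample configuration correctly, the relevant parameters are mail= and mail-failure=, which would look something like the lines below; I haven’t verified the exact behaviour on my install.

mail=root           # where ddclient sends all its messages
mail-failure=root   # where it sends failed-update messages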

Using an Ubuntu package

ddclient is available as a package and can be installed in the usual way:

apt-get install ddclient

It doesn’t have an EC2 option and the post-install script is interactive. I chose the simplest options I could and then edited the config files by hand.

It puts things in different places, as Ubuntu usually does.

There is a configuration file at /etc/default/ddclient; this is a shell fragment read by the init script. It has three switches which determine whether ddclient runs as a daemon and how frequently it polls.
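From memory, the settings in that file look roughly like this; the exact names may differ between releases, so check the file the package actually installs.

run_daemon="true"        # run ddclient as a standalone daemon
daemon_interval="300"    # seconds between checks of the public address
run_ipup="false"         # don't trigger ddclient from the ppp ip-up scripts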

The default conf file is /etc/ddclient.conf, not /etc/ddclient/ddclient.conf. I have linked this location to my conf file in /etc/ddclient, so I now meet both standards and, I hope, can use apt-get to keep the package up to date.
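The linkage is just a move and a symlink, along these lines:

sudo mv /etc/ddclient.conf /etc/ddclient/ddclient.conf
sudo ln -s /etc/ddclient/ddclient.conf /etc/ddclient.conf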

Discovering the instance public address

In addition to the instructions in the ddclient README, I needed to look at the instructions documented in Amazon EC2 – What You May Not Have Known, a blog article at codesta.com. This details the magic runes required to make ddclient work for an Amazon EC2 instance. ddclient clearly needs the public IP address and uses its use=cmd option to run a command that fetches it.

use=cmd, cmd='curl http://169.254.169.254/2007-08-29/meta-data/public-ipv4'
protocol=dyndns2
server=members.dyndns.org
wildcard=YES
custom=yes, your.server1.com, your.server2.com
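To prove the whole chain works, a one-shot run with the debugging switches turned on is useful; if I have remembered the flags correctly, it is:

sudo ddclient -daemon=0 -debug -verbose -noquiet

followed by a dig or nslookup against the dyndns name to confirm the new address has propagated.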