File Transfer & Windows 8.1

I want and need a better, faster interface for FTP and my blog server. Obviously it needs to be encrypted. It seems that Windows 8.1 does not offer SFTP natively but recommends WebDAV instead. On my various clients I have multiple operating systems, but mainly Windows.

I had a quick poke around to see how to do this. Firstly, I have an RTU for WISE-FTP through 1&1, which has great Windows desktop integration, but I usually use FileZilla. (1&1 have since moved away from WISE-FTP.)

MS offer a OneDrive cloud service, as does Dropbox, but I want to use my 1&1 file system. (Perhaps the answer is WISE-FTP, since that's what they offer.) Have Microsoft left this gap for third-party products to fill as part of non-monopolistic behaviour? Surely, though, doing the desktop integration requires the licensing of engineering rights and documentation.

Anyway, Google doesn't seem much help. Odd.

I have for the moment decided to use FileZilla with the --site-manager command line option. NB the double dash -- is what is used on Windows as well as on shell-based command lines. I can find ways to go straight to my site, but these require placing the password inside the shortcut.
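For reference, the shortcut target looks something like this (the install path is the usual 32-bit default; yours may differ):

```
"C:\Program Files\FileZilla FTP Client\filezilla.exe" --site-manager
```

This opens the Site Manager on launch, so the stored site credentials stay inside FileZilla rather than in the shortcut.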


I have made this icon –

(Image: FileZilla menu icon.) This is the .png version; I used ConvertICO to make the .ico.

Ruggedising the Internet

Mike Masnick writes a little article forecasting that engineers will re-write the single points of failure out of the internet. He entitles his article Building A More Decentralized Internet: It's Happening Faster Than People Realize. He cross-references two articles he wrote back in 2010, Operation Payback And Wikileaks Show The Battle Lines Are About Distributed & Open vs. Centralized & Closed and The Revolution Will Be Distributed: Wikileaks, Anonymous And How Little The Old Guard Realizes What's Going On, in which he, more accurately, recognises the current and future power of distributed and private networks. It should be remembered that these predictions all predate the Arab Spring, the recent protests in Turkey, and the state responses to the use of networks.

Masnick predicts that the judicial and informal non-judicial attacks on certain sites on the internet will lead to an engineering response and that the single points of failure will be remediated. He points at an article in the New Yorker, The Mission to Decentralize the Internet, which discusses the barriers to mass adoption of superior distributed solutions and some of the ideological history.

One of the responses to today's challenges is this manifesto for an Internet for the 21st Century, published under the hashtag #ybti: an interesting identification of the inadequacies of even the best of today's tools. I also need to check out the keynote proceedings of 30C3, the Chaos Computer Club's congress; I'm not sure if any of these act as an alternative manifesto. The manifesto calls for,

Our concept for a new Internet is based on the following design principles:

• ubiquitous end-to-end encryption, removing the necessity to trust any third parties that might access our data while it is being transmitted or stored
• obfuscation of transmission patterns, preventing the analysis of social relations, behavior patterns and topical interests of the participants in a network
• decentralized authentication mechanisms, removing the necessity to trust centralized certification authorities that can be compromised
• multicast technology, because we need to interconnect billions of users without the need for centralized server farms
• distributed data flow and storage, making bulk collection of data economically unattractive
• consistent use of free and open software, putting the system under permanent public scrutiny and giving users control over their computation

The comments on Mike's article are gratifyingly short of the usual bile about piracy, and at least one contributor points at DNS as one of the choke points. A contributor called ninja says,

One of the next steps on the Internet that must take priority is the development of a decentralized DNS system that can be trusted. And encrypted. There are many developments in the DNS field such as the recent DNSSEC and that OpenDNS initiative to encrypt DNS queries (I’m using it but I honestly don’t know how to check if it works!). Then bittorrent will evolve into a huge cloud hdd making it virtually impossible to take down files from that big cloud. I’m guessing tor may evolve into something that will be used everyday too to ensure privacy and anonymity.

and so adds a storage medium to the list of SPOFs.

One of the replies to the comment about DNS points at Zooko's triangle. I documented my researches on P2P DNS in this article on this wiki, which, like the New Yorker article, points at Bitcoin's name service, Namecoin.

Interesting initiatives obviously include TOR, and the EFF pointed me at Tahoe-LAFS, which has its home here…. The Pirate Browser and Diaspora suggest, with TOR, that peer-to-peer is the way to go, but the stranglehold that the ISPs have on connectivity in the US and Europe will remain a choke point. Another initiative I discovered while writing this article is Project Meshnet. We, or maybe our municipalities, will need to build peer-to-peer connectivity, which may work well and easily in the towns but will be harder to build in rural areas. DIY is hard since the use of the radio spectrum is highly regulated, but I know that the anti-HADOPI campaigners and some US municipalities have considered building mesh networks from WiFi or WiMAX appliances; in the UK this is currently frowned on by the ISPs and inhibited by the Digital Economy Act, although that is struggling to become law. (I need to remember the story about someone switching their hub OS, where they had originally used Linux, because the radio spectrum regulator didn't want the radio ASIC device driver source published, since it allowed an illegal and unlicensed use of the spectrum.)

While tidying up the office, I came across a hard copy of this, "Decentralized Infrastructure for Wikileaks", which has some good ideas.

My personal experiences recently are, firstly, moving into a flat in London, where I was legally able to piggyback off my neighbours' connections using BT WiFi, and, by contrast, the difficulties friends living in more rural areas have found in getting connected. At the moment only massive multinational corporations can afford the cable or satellite networks that allow the internet's connectivity, but it's possible the entry point is coming down, shown the way by Facebook's purchase of Ascenta to begin to execute on the vision expressed in this white paper by Mark Zuckerberg.

When will they give up on the Digital Economy Act? (It's coming up to its 4th anniversary and they still have no timetable for its implementation.)


Bruce Schneier points to WhatsApp's adoption of end-to-end encryption for all content. The comments are, as ever, worth reading and don't degenerate into foolish argument. I like "Encryption is a honeypot", the point that encrypted broadcasting kills the usefulness of metadata, and the idea of running WhatsApp over TOR. DFL 9 Apr 2016

I have installed the Related Articles plugin and between me and it, the following links might be useful.


Looking at DNS and the attempt to P2P it.

Peter Sunde launched a project, reported at Computer World in an article called “P2P DNS to take on ICANN after US domain seizures”

It seems to have got stuck. This article on Slashdot, dated 18 Oct 2011 and called Continuing the Distributed DNS System, has some pointers. See also P2P-DNS taking control of the Internet at

The nearest successor seems to be Namecoin, see , & its Wikipedia page

While researching this I came across a page on alternate roots at Wikipedia.

Configuring NTP

I want to configure NTP on this box, i.e. the Cobalt Qube, as it's losing time. Badly.

This is now done: I have a very simple ntp.conf file and am using DNS hostnames. This is not advisable under Linux because you must have a valid DNS service available when the daemon seeks to resolve the addresses. It might be possible to resolve the DNS names via the /etc/hosts file. The Howto article below is quite good.
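A minimal ntp.conf of the kind described might look like this (the pool.ntp.org hostnames and the drift file path are my illustration; use servers whose operators permit taking a feed):

```
# /etc/ntp.conf - minimal sketch, using DNS hostnames
server 0.pool.ntp.org
server 1.pool.ntp.org
driftfile /var/lib/ntp/drift
```

The driftfile line lets ntpd remember the clock's frequency error across restarts, which speeds up convergence.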

The test should be ntpq -p to see if the daemon is working OK. I don't think the Cobalt ntpd script does this; it just browses the process table.


  1. Check ntpd isn't already running
  2. Add the server lines to /etc/ntp.conf; you really need two. Use time servers from organisations that permit, or don't care, that one's taking a feed.
  3. Enable port 123/udp on the firewall
  4. Start the daemon
  5. Test the service using ntpq; can you see all the configured servers?
  6. If the drift from the time server is significant, then take the service down and use ntpdate -u to set the clock
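The steps above can be sketched as a command sequence (init script paths and the time server hostname are illustrative; the Cobalt's rc layout may differ):

```
ps -ef | grep '[n]tpd'          # 1. check ntpd isn't already running
vi /etc/ntp.conf                # 2. add at least two server lines
                                # 3. open 123/udp on the firewall
/etc/rc.d/init.d/ntpd start     # 4. start the daemon
ntpq -p                         # 5. can you see all the configured servers?
/etc/rc.d/init.d/ntpd stop      # 6. if the drift is large, step the clock
ntpdate -u 0.pool.ntp.org
/etc/rc.d/init.d/ntpd start
```

The -u flag makes ntpdate use an unprivileged source port, useful when the daemon's port 123 is otherwise in use or firewalled.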

The Linux chkconfig utility is set up for the rc script and I shall therefore invoke it using chkconfig --add.

I have found the following links

I returned to this in 2011, and found it's all got a lot easier.

Municipal WiFi

In Jan 2012, the Telegraph ran a story on how Westminster and Kensington & Chelsea boroughs have agreed with O2 to build the world's biggest free WiFi network; this is mirrored in this thread at South East Central.

  • Municipal Urban WiFi at Wikipedia, includes a list of cities with free citywide WiFi, in the UK, Bristol and Norwich. (Liverpool has a paid service and the funding status for Newcastle in County Down is unstated.)

San Francisco

San Francisco famously experimented with free city-wide WiFi. It started with quite a splash while I was visiting on a frequent basis. It seems they suspended municipal investment in the programme in 2007 and invested in a more directed programme to resolve the digital divide.

Socket Programming

It has to be Python

Don’t ask.
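As a starting point, here's a minimal sketch of socket programming in Python: a one-shot TCP echo server and client talking over localhost.

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back whatever it receives."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server: bind to an ephemeral port on localhost and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Run the accept/echo loop in a background thread.
t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client: connect, send a message, and read the echo back.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()

t.join()
server.close()
print(reply.decode())  # hello
```

Binding to port 0 asks the OS for any free port, which keeps the example self-contained and avoids clashing with services already running on the box.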


ddclient is a program for Linux that negotiates with to allow systems with dynamic TCP/IP addresses to have static DNS names.

Originally written in 2011, and revisited this in 2013


I made a new Amazon machine. Don't muck around: install ddclient using apt-get. The post-install script now takes you through the critical configuration questions and writes a .conf file.

If you make the AMI first, you have the AWS public IP address from the public IP name. The curl command below also still works.

  1. Make a DNS name at, using the AMI's public IP address
  2. Install ddclient using apt-get
  3. Choose
  4. Provide your login/password
  5. Select
  6. Use select from list
  7. Select your domain name
  8. Wait for the name to propagate <- most important
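The result of that interactive run is a conf file along these lines (hostname and credentials are placeholders; the protocol and server values depend on the provider chosen above):

```
# /etc/ddclient.conf - sketch of what the post-install script writes
protocol=dyndns2
use=web                 # discover the address via an external web check
login=myuser
password='mypassword'
myhost.example.org
```

The use=web line matters on EC2: the instance's own interfaces only carry the private address, so ddclient has to ask an outside service what the public one is.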


I decided to use ddclient to keep a constant name on my Amazon EC2 Machine. This was an Ubuntu image. I therefore have the choice of using the package manager or a tarball install, although the latter is not automated.

Obtaining the Package from dyndns

I originally downloaded it from and installed it on my, now defunct, Cobalt Qube.

This section was written in 2011 when I replaced my Cobalt Qube with an AMI.

I used wget to pull it down. This was version 3.7.3. It has a README which documents how to install the client. I used update-rc.d to install the start/stop script, which I called rc.ddclient. I used the Debian sample as my model. This required the addition of the LSB init headers. I added the login credentials to the appropriate configuration file.
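The LSB init headers look like this (the Required-Start values here are typical rather than copied from my script):

```
### BEGIN INIT INFO
# Provides:          ddclient
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Update dynamic DNS entries
### END INIT INFO
```

With the headers in place, update-rc.d can work out the correct runlevel ordering when registering the script.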

  • The guys at ddclient say I didn’t look hard enough for an LSB compliant script.
  • The default configuration file is held at /etc/ddclient/ddclient.conf.

ddclient needs a directory, /var/cache/ddclient; it doesn't make it in the initialisation scripts, so this needs to be done by hand.

I have an Ubuntu image, so I needed to install the Perl SSL libraries.

sudo apt-get install libio-socket-ssl-perl

Other distros may have this already, and using the package manager does this for you.

N.B. ddclient tries to mail its messages. My server doesn’t have sendmail configured. This seems to be configurable using parameters in the config file. <say more here>

Using an Ubuntu package

ddclient is available as a package and can be installed the usual way

apt-get install ddclient

It doesn't have an EC2 option, and the post-install script is interactive. I chose the simplest options I could and then edited the config files by hand.

It locates stuff in different places as Ubuntu usually does.

It has a script conf file in /etc/default/ddclient; this is a shell script and affects the init script. It has three switches which determine whether it runs as a daemon and how frequently it polls.
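From memory, that file looks something like this (variable names as I recall them from my install; check your own copy):

```
# /etc/default/ddclient - sketch
run_ipup="false"       # update when an interface comes up
run_daemon="true"      # run ddclient as a background daemon
daemon_interval="300"  # polling interval in seconds
```

Since it's sourced by the init script as shell, the values must stay quoted strings.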

The default conf file is /etc/ddclient.conf, not /etc/ddclient/ddclient.conf. I have linked this location to my conf file in /etc/ddclient, so I now meet both standards and, I hope, can use apt-get to keep the package up to date.

Discovering the instance public address

In addition to the instructions in the ddclient README, I needed the instructions documented in Amazon EC2 – What You May Not Have Known, a blog article at . This details the magic runes required to make ddclient work for an Amazon EC2 instance: it clearly needs the public IP address, and uses ddclient's cmd mechanism to get it.

use=cmd, cmd='curl'
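A fuller version of that line might look like this; the metadata URL is my sketch, not necessarily the command the article used, but the EC2 instance metadata service does return the bare public address:

```
# ddclient.conf fragment for EC2: fetch the public address with an
# external command rather than reading a (private) local interface
use=cmd, cmd='curl -s http://169.254.169.254/latest/meta-data/public-ipv4'
```

Any "what is my IP" web service would do equally well as the command's target.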

Microsoft RDP & Virtual Box

At some point VirtualBox came with RDP as part of the setup, and it's allegedly faster than VNC. I want to connect my iPod Touch to my PCs using Mocha's RDP Lite.


I have a Windows 7 beta VM. (See my Sun blog.) Initially I couldn't connect using either the iPod or the Alienware.

I am now connected using the Alienware and the Microsoft client. The host is XP Home Edition SP3, with a W7 VM hosted in VirtualBox 2.2.4; networking = bridged, port = 3389, with the NULL authentication libraries. NB the port is not available on the internet. It does not work with external authentication, and the manual suggests that guest authentication is experimental.

Perry says I need to have the VRDPAuth.dll library in a folder pointed to by the %PATH% variable. This page at explains how to do it.

I don't know whether port 3389 will work, because of the order of the testing, but the Mochasoft client is still not working. They have a FAQ. I can't get it to work over EDGE either. Mochasoft suggest an incompatibility or a firewall as the problem. So:

  • fix the authentication problems and turn it on
  • sort out the mochasoft problems

What didn’t work!

I had assumed that the initial failures were due to the failure to present the RDP port to the LAN, and I tried to map the VM port to the real port. My VM was a NAT machine. I wrote about port mapping on my blog when I exposed Apache to my network. I need to port the script; maybe now is the time to wrap it in TCL. The VirtualBox 2.1.4 manual discusses port forwarding in Section 6.1.4. This fails: using [gs]etextradata to map port 3389 from the guest to the host causes the W7 VM to fail to boot. I have amended the VM config to change the port as suggested by this thread at and this blog at . I still get "your remote session has ended". PerryG says you must use bridged networking.
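Changing the VRDP port is done with VBoxManage; the syntax below is as I recall it for the VirtualBox 2.x series, so check the manual for your version, and the VM name is a placeholder:

```
VBoxManage modifyvm "W7" -vrdpport 3390
```

The VM must be powered off before modifyvm will accept the change.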


N.B. The VirtualBox manual is the first port of call. Otherwise these might be useful.


Samba

Not just for file sharing: Samba also supports printing and name services. I now make it part of my standard Linux builds within VirtualBox. I can then use VB shared folders or the host OS virtual file systems. (Is this true for Mac OS? Need to test it.)


Some useful links,


The CentOS 5.2 Deployment Guide has a chapter on Samba.
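A minimal share definition of the sort I drop into smb.conf on those builds (share name and path are examples):

```
# /etc/samba/smb.conf fragment
[shared]
   comment = Shared folder
   path = /srv/samba/shared
   read only = no
   guest ok = yes
```

Run testparm to check the syntax before restarting the daemons.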


Linksys WAG 54GS & Power Offs

Household Power Test

I tested the household power yesterday and turned the Gateway off for 2 hours.

Resetting Factory Defaults

When it came back up, the wireless wouldn't work. So I rang Linksys and they talked me through resetting the factory defaults.

For those of you following me: you must document your encapsulation and its consequent parameters, your wireless settings and encryption passwords, and any firewall ports that are open.

It seems that the reliability of a system boot after a sustained power down is not 100%.

Powering Down

I was recommended to power down the gateway by removing the power lead from the appliance. I assume using the switch on the wall plug is equally effective. Don’t use the power switch on the front of the box. This is a deeply flawed piece of UI design 🙁