# I wrote to Turnbull to complain about the lack of NBN. Here is his reply

After travelling to New Zealand for #lca2015 and experiencing fibre to the home at my friend's house, I decided to write to Malcolm Turnbull to express my concern about the slow rollout of the NBN in Australia.

He personally replied to my email only four hours later. In the interest of openness I am putting the whole email below. I have rearranged it to fix the top-posting.

The tl;dr is “It’s all Labor’s fault”

My email to Mal (I think we can say we are on first-name terms now):

From: Jason Lewis
Sent: Tuesday, 20 January 2015 1:52 PM
To: Turnbull, Malcolm (MP)
Subject: NBN Roll-out

Dear Mr Turnbull,

I’m writing to express my concern with the woefully slow roll-out of the
NBN.

I’ve recently been travelling in other countries where fibre to the home is already widely available.

I feel this will lead to a further reduction in Australia’s
international competitiveness.

Please devote more resources to speeding up the roll-out.

Thanks,

Jason

And his response:

Dear Jason,

Unfortunately Labor significantly underestimated the cost and complexity of this project and, as a result, released rollout schedules that were unrealistic and inaccurate.

For instance, Labor originally forecast that 2.7 million houses would be passed by fibre by 30 June 2014. In NBN Co’s last Corporate Plan released in August 2012, it was revised to 1.3 million houses passed by fibre. The comprehensive Strategic Review completed in December 2013 however, found that the NBN would only pass 467,000 houses with fibre by 30 June 2014. The actual number reached by 1 July 2014 was 492,000 premises – less than one-fifth of the original target.

I highlighted the problems which Labor created with their false rollout expectations in a recent blog available here: www.malcolmturnbull.com.au/media/trying-to-make-sense-of-the-confused-and-confusing-mr-clare1

In addition, the Government recently undertook a Broadband Availability and Quality Report, which found that there are more than 1.6 million premises across Australia with very poor or no fixed broadband access at all. However, Labor made no effort to prioritise these areas in their rollout.

The Government has instructed NBN Co to revise its current rollout schedule to meet three key objectives:

1. To ensure that the underserved areas in Australia are prioritised in the NBN Co rollout. On average, areas with very poor broadband will receive upgrades two years sooner.
2. To ensure that the NBN upgrades are delivered sooner and more affordably, by using a mix of technologies. The Strategic Review found that under the model adopted by the Coalition, the project will be finished four years sooner than would have otherwise been the case.
3. To ensure that information provided in the public domain is accurate and can be relied upon by businesses and households waiting for broadband upgrades.

The fact is that the NBN, up to the election, had reached only three per cent of Australian premises after four years and $6.4 billion of funding. NBN Co is now determining which technologies are most cost effective and should be utilised on an area-by-area basis so as to minimise peak funding, optimise economic returns and deliver broadband upgrades sooner. This is highly demanding and complex work which involves renegotiating deals with Telstra and Optus to take over portions of their fixed line networks. Naturally this is taking time to complete.

In the meantime, NBN Co has continued to expand its fibre network in areas where construction contracts had been signed at the time of the election. Across the country, more than 500,000 premises have been passed by NBN fibre and work is underway to extend the network to reach a further 600,000 premises. Sites that are not currently represented on the NBN Co maps are being reviewed in line with rollout priorities.

Importantly, the network will continue to be built on a state-of-the-art platform, but will use existing infrastructure where it makes sense to do so. In areas where work has not commenced, NBN Co will be making decisions about construction on the basis of review findings. NBN Co is significantly advanced in planning the multi-technology mix and rollout plans will be publicly released once they are completed. More information about the NBN rollout will be published by NBN Co on its website www.nbnco.com.au when it is available.

You may also be interested to know that the Government has secured the progressive transfer of the necessary parts of Telstra’s copper and HFC (pay TV) cable networks to NBN Co at no additional expense. Telstra’s 1.4 million shareholders have been ‘kept whole’ in keeping with the Government’s pre-election commitment. These agreements are a key milestone in shifting the NBN to the Multi Technology Mix the company has determined is its optimal strategy.
The December 2013 Strategic Review found the shift to a multi-technology NBN will reduce its cost by $30 billion, and save at least four years in construction time.
Importantly, under these agreements, NBN Co will make use of Telstra’s copper and HFC networks (i.e. the network used to deliver pay television) rather than decommissioning and wasting these assets, as Labor had planned.

Yours sincerely,

Malcolm Turnbull

# #lca2015: Using FOI to get source code

Michael Cordover has spent significant time and money chasing down the source code used for counting Australian election results. He goes into the reasons why that is important.

Another great talk from #lca2015.

# I just donated to Wikipedia

After reading a blog post from someone who donated to Wikipedia because they use it a lot, I realised I use it a lot too, so I decided to donate as well. Unfortunately I can’t quite remember whose post it was, but thank you, whoever you are!

define('WP_FAIL2BAN_BLOCKED_USERS','^organictrader$');

See the readme for more details about what these settings do.

The second part is enabling filters and gaols in fail2ban. Luckily this is also provided by the WP fail2ban plugin. Copy the wordpress.conf file from the wp-fail2ban directory to the fail2ban config directory:

~# cp /var/www/wp-content/plugins/wp-fail2ban/wordpress.conf \
   /etc/fail2ban/filter.d

Then edit /etc/fail2ban/jail.local and insert:

[wordpress]
enabled = true
filter = wordpress
logpath = /var/log/auth.log
# set the ban time to 1 hour - probably could be even higher for good measure
bantime = 3600
# needed for debian wheezy otherwise fail2ban doesn't start and reports
# errors with the config
port = http,https

Now restart fail2ban:

~# /etc/init.d/fail2ban restart
[ ok ] Restarting authentication failure monitor: fail2ban.
~#

Remove the block on the xmlrpc.php file from your apache config and restart apache. Then you should see something like this in your fail2ban logs:

2014-08-09 23:18:30,405 fail2ban.actions: WARNING [wordpress] Ban 117.195.37.14
2014-08-09 23:20:49,090 fail2ban.actions: WARNING [wordpress] Ban 78.97.220.237
2014-08-09 23:20:50,108 fail2ban.actions: WARNING [wordpress] Ban 46.108.226.105
2014-08-09 23:21:04,162 fail2ban.actions: WARNING [wordpress] Ban 120.28.140.93
2014-08-09 23:21:28,206 fail2ban.actions: WARNING [wordpress] Ban 175.142.187.77
2014-08-09 23:21:36,234 fail2ban.actions: WARNING [wordpress] Ban 88.240.97.76
2014-08-09 23:21:36,294 fail2ban.actions: WARNING [wordpress] Ban 122.177.229.110
2014-08-09 23:21:44,346 fail2ban.actions: WARNING [wordpress] Ban 89.106.102.15
2014-08-09 23:21:46,400 fail2ban.actions: WARNING [wordpress] Ban 2.122.219.188
2014-08-09 23:21:52,423 fail2ban.actions: WARNING [wordpress] Ban 95.69.53.13
2014-08-09 23:22:12,488 fail2ban.actions: WARNING [wordpress] Ban 5.12.12.66
2014-08-09 23:22:12,509 fail2ban.actions: WARNING [wordpress] Ban 182.182.89.23
2014-08-09 23:22:42,564 fail2ban.actions: WARNING [wordpress] Ban 178.36.126.249
2014-08-09 23:22:53,590 fail2ban.actions: WARNING [wordpress] Ban 36.83.125.10
2014-08-09 23:22:53,607 fail2ban.actions: WARNING [wordpress] Ban 95.231.59.185

I found, however, that I was being hit from over 1800 unique IP addresses, and despite fail2ban successfully banning them, it was taking too long to ban enough of them for the load to return to normal, so I re-blocked the xmlrpc.php file for 24 hours. After that, I unblocked it and the DDoS seemed to have gone away. So far so good.

# Howto quickly find your Beaglebone Black’s IP address

Whenever I connect my Beaglebone Black (BBB) to a network, I have to work out its IP address so I can ssh into it. This can be tricky. Some of your options are:

1. Connect to the serial terminal, or connect over the USB network interface (which gives the BBB the address 192.168.7.2), log in and issue the command ip addr.
2. Use nmap to try to search out its IP address on your subnet, but I have found this time consuming and not very accurate.
3. Use avahi-browse -rat (thanks Madox for that tip).

Last night I came up with a Better Way™. Rather than trying to determine the BBB’s address, why not use a fully qualified domain name and a dynamic DNS service? I could then just type ssh myfqdn.com or whatever and log in. Think how that would simplify one’s life!

To implement this, set up dynamic DNS somewhere with a FQDN for your BBB. If you already have your own domain name, you can use a sub-domain of that. I think it’s fairly common for DNS hosts to offer an API to update your IP address. I happen to use Rimu Hosting and they have their own simple web API.

Then you just need to write a little script that updates the IP address every time the DHCP client receives a new one, and drop it into /etc/dhcp/dhclient-exit-hooks.d/.

Here is my script. It will only work with Rimu Hosting, as they have their own privately developed API, and you’d need to insert your own API key into the script.

#!/bin/bash
# update ip address with rimu hosting. See https://rimuhosting.com/dns/dyndns.jsp
if [[ ! -z ${new_ip_address} ]]
then
    echo $(date +"%F %T") ${new_ip_address} >> /root/ddns.txt
    curl "https://rimuhosting.com/dns/dyndns.jsp?action=SET&name=clock.emacstragic.net&value=${new_ip_address}&type=A&api_key=XXX"
else
    echo "got no ip"
fi
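Worth knowing: the files in /etc/dhcp/dhclient-exit-hooks.d/ are sourced by dhclient-script rather than run as separate processes, which is how the hook can read ${new_ip_address} without being passed any arguments. Here is a minimal standalone sketch of the same guard logic, with the curl call stubbed out by an echo so it can be dry-run anywhere (the IP address is just an example):

```shell
#!/bin/bash
# Sketch of the exit-hook's guard logic, pulled out as a function so it can
# be exercised locally. In the real hook, dhclient-script sets
# new_ip_address before sourcing the file.
ddns_update() {
    local new_ip_address="$1"
    if [[ -n ${new_ip_address} ]]
    then
        # the real hook logs the address and calls curl here
        echo "update DNS to ${new_ip_address}"
    else
        echo "got no ip"
    fi
}

ddns_update "192.0.2.10"   # prints: update DNS to 192.0.2.10
ddns_update ""             # prints: got no ip
```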

## Update:

I discovered this didn’t work at home. It turns out that dnsmasq in OpenWRT ignores this kind of DNS reply: its rebind protection drops upstream answers that resolve to private (RFC 1918) addresses, which is exactly what a dynamic DNS name for a LAN host returns. The solution is to add a list rebind_domain line to the /etc/config/dhcp file on the router.

config dnsmasq
.
.
.
list    rebind_domain 'clock.emacstragic.net'

Thanks to Michal Čihař for the solution to that.

# Facebook selecting the wrong thumbnail for WordPress links

Does Facebook keep selecting the wrong thumbnail for your WordPress links? The solution is to give Facebook some extra instruction about which image to use for the thumbnail, using Open Graph.

If you use a static frontpage, it’s a simple matter of adding something like:

<meta property="og:image" content="http://samplesite.com/files/2014/05/web-thumb.png" />

to the Full meta tags of your front page.

WordPress >= 3.9.1 seems to do the right thing for posts according to my testing.
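If you want to confirm what Facebook's scraper will actually see, fetching the page and grepping for the tag is enough. A small sketch using a saved copy of the markup (the file path and image URL here are placeholders):

```shell
#!/bin/sh
# Stand-in for the rendered front page; in real use you would fetch it with
# curl -s http://samplesite.com/ instead of reading a local file
cat > /tmp/front.html <<'EOF'
<html><head>
<meta property="og:image" content="http://samplesite.com/files/2014/05/web-thumb.png" />
</head><body></body></html>
EOF

# Pull out any og:image tags so you can see which thumbnail will be used
grep -o '<meta property="og:image"[^>]*/>' /tmp/front.html
```

Facebook caches its scrape results, so after changing the tag it is worth forcing a re-scrape with Facebook's own debugger tool.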

# Backing up your Beaglebone Black over the network

Using a method I previously wrote about, it’s quite easy to back up your Beaglebone Black over the network.

ssh root@bbb 'tar cf - / 2>/dev/null' | pv -cN tar \
| bzip2 | pv -cN bzip2 > bbb-backup-tar.bz2

The bzip2 compression runs locally because it is presumably faster there than on the Beaglebone Black. I didn’t actually test that hypothesis, though.

pv gives nice little indicators:

bzip2: 1.81MB 0:00:12 [ 995kB/s] [      <=>                               ]
tar: 36.2MB 0:00:12 [3.84MB/s] [                             <=>        ]
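The same tar-to-stdout trick round-trips locally, which is an easy way to sanity-check the pipeline (and rehearse a restore) before pointing it at the BBB. A sketch with pv dropped and illustrative paths:

```shell
#!/bin/sh
# Build a tiny stand-in for the BBB filesystem
mkdir -p /tmp/bbb-demo/etc
echo "bbb" > /tmp/bbb-demo/etc/hostname

# Same shape as the remote command: tar to stdout, compress on the near side
tar cf - -C /tmp bbb-demo 2>/dev/null | bzip2 > /tmp/bbb-backup-tar.bz2

# Restoring is the pipeline in reverse
mkdir -p /tmp/restore
bunzip2 -c /tmp/bbb-backup-tar.bz2 | tar xf - -C /tmp/restore

cat /tmp/restore/bbb-demo/etc/hostname   # prints: bbb
```

A real restore to the BBB would reverse the ssh direction along the same lines, e.g. bunzip2 -c bbb-backup-tar.bz2 | ssh root@bbb 'tar xf - -C /'.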

# Printing from Windows to a samba shared CUPS-PDF printer sometimes fails

I had a problem where prints to our CUPS-PDF printer sometimes failed to be processed on the server. The job would disappear as though it had been printed, but nothing else would happen. Printing from the same application to a Windows-based PDF printer, and then printing the resulting PDF to the CUPS-PDF printer via Adobe Acrobat, would work fine. Printing the same PDF to CUPS-PDF via Sumatra PDF, however, would still fail.

Further investigation revealed that the resulting print job files would differ. The jobs that fail looked like they contained a lot of binary data but the ones that succeeded looked like normal PDF files.

Then I discovered this entry in the Windows Event Viewer:

The document XXXX, owned by jason, failed to print on printer \\server\PDF. Try to print the document again, or restart the print spooler.
Data type: NT EMF 1.008. Size of the spool file in bytes: 2555904. Number of bytes printed: 0. Total number of pages in the document: 1. Number of pages printed: 0. Client computer: \\CLIENT. Win32 error code returned by the print processor: 0. The operation completed successfully.

Googling that error took me to this RPi forum thread, which had a solution buried near the bottom. Thanks to Chemirocha for that tip. This bug had been plaguing me on and off for a few years!

# automysqlbackup ERRORS REPORTED: MySQL Backup error Log Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.

I was regularly receiving error emails like this from cron, generated by automysqlbackup:
ERRORS REPORTED: MySQL Backup error Log for somehost.com.dxa - 2014-05-01_06h26m
-- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.

It turns out that mysqldump now warns you if the events table is not being dumped. To get rid of the warning, either ensure the table gets dumped when you do a backup, or explicitly tell mysqldump not to dump it. I chose the former approach; it is a backup, after all.
Simply add the following line to /etc/mysql/my.cnf:

[mysqldump]
...
...
...
events

This tells the mysqldump program to explicitly include the events table, and removes the warning. You can see a discussion about this option here.

If you are using Debian, you will need to add that section to the /etc/mysql/debian.cnf file as well, because automysqlbackup uses that file for its configuration instead. See the Debian bug report for more details.
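For reference, the addition to /etc/mysql/debian.cnf is the same shape as the one in my.cnf; leave the existing sections and credentials that Debian keeps in that file untouched and just append the section (a sketch of the addition, not the whole file):

```ini
# /etc/mysql/debian.cnf -- existing sections and credentials left as-is
[mysqldump]
events
```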