Does Facebook keep selecting the wrong thumbnail for your WordPress links? The solution is to give Facebook explicit instructions about which image to use for the thumbnail, using an Open Graph tag.
If you use a static frontpage, it’s a simple matter of adding something like:
<meta property="og:image" content="http://samplesite.com/files/2014/05/web-thumb.png" />
to the Full meta tags of your front page.
You can check what Facebook will do with your site by using their link debugger: https://developers.facebook.com/tools/debug
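You can also sanity-check the tag from the command line by grepping the page source for what Facebook's scraper will see. In this sketch the page fetch is simulated with a string; in practice you'd use `page=$(curl -s "http://samplesite.com/")` instead (samplesite.com being the placeholder domain from the example above):

```shell
# Simulated page source; in practice: page=$(curl -s "http://samplesite.com/")
page='<html><head><meta property="og:image" content="http://samplesite.com/files/2014/05/web-thumb.png" /></head></html>'

# Pull out the image URL the scraper would pick as the thumbnail
echo "$page" | grep -o 'og:image" content="[^"]*"' | sed 's/.*content="\([^"]*\)"/\1/'
```

If this prints nothing, the tag isn't in your page's head and Facebook will fall back to guessing.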
In my testing, WordPress >= 3.9.1 seems to do the right thing for posts automatically.
Using a method I previously wrote about, it’s quite easy to back up your Beaglebone Black over the network.
ssh root@bbb 'tar cf - / 2>/dev/null' | pv -cN tar \
| bzip2 | pv -cN bzip2 > bbb-backup-tar.bz2
The bzip2 compression runs locally on the assumption that it will be faster there than on the Beaglebone Black, though I didn’t actually test that hypothesis.
pv gives nice little progress indicators:
bzip2: 1.81MB 0:00:12 [ 995kB/s] [ <=> ]
tar: 36.2MB 0:00:12 [3.84MB/s] [ <=> ]
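It’s worth confirming the backup is actually readable before you need it: pipe it back through bzcat and list the archive. Here’s a sketch using a throwaway directory in /tmp as a stand-in for the Beaglebone’s filesystem:

```shell
# Build a tiny archive the same way as the ssh pipeline (throwaway demo dir)
mkdir -p /tmp/bbb-demo
echo "demo" > /tmp/bbb-demo/etc-hostname
tar cf - -C /tmp bbb-demo 2>/dev/null | bzip2 > /tmp/bbb-backup-demo.bz2

# Verify: decompress and list the contents without extracting anything
bzcat /tmp/bbb-backup-demo.bz2 | tar tf -
```

For the real backup the check is just `bzcat bbb-backup-tar.bz2 | tar tf - | less`.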
I had this problem where prints to our CUPS-PDF printer sometimes failed to be processed on the server. The job would disappear as though it had been printed, but nothing else would happen. Printing from the same application to a Windows-based PDF printer, and then printing the resulting PDF via Adobe Acrobat to the CUPS-PDF printer, would work fine. Printing the same PDF via Sumatra PDF to CUPS-PDF would also fail.
Further investigation revealed that the resulting print job files differed: the failing jobs looked like they contained a lot of binary data, while the successful ones looked like normal PDF files.
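A quick way to tell the two kinds of job apart without opening them in an editor: PDF files always begin with the magic bytes `%PDF`, while EMF spool data does not. The path below is a stand-in demo file; point `head` at a real job in your spool directory instead:

```shell
# Stand-in for a spool job file; point this at a real job to check it
job=/tmp/demo-job.spl
printf '%%PDF-1.4\nrest of the document...' > "$job"

# A healthy (PDF) job prints "%PDF" here; a failing EMF job will not
head -c 4 "$job"
```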
Then I discovered this entry in the Windows Event Viewer:
The document XXXX, owned by jason, failed to print on printer \\server\PDF. Try to print the document again, or restart the print spooler.
Data type: NT EMF 1.008. Size of the spool file in bytes: 2555904. Number of bytes printed: 0. Total number of pages in the document: 1. Number of pages printed: 0. Client computer: \\CLIENT. Win32 error code returned by the print processor: 0. The operation completed successfully.
Googling that error took me to this RPi forum thread, which had a solution buried down near the bottom. Thanks to Chemirocha for the tip. This bug had been plaguing me on and off for a few years!
I was receiving email error messages like this from my automysqlbackup cron job on a regular basis:
ERRORS REPORTED: MySQL Backup error Log for somehost.com.dxa - 2014-05-01_06h26m
-- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.
It turns out that mysqldump now warns you if the events table is not being dumped. To get rid of the warning, either ensure the table gets dumped when you do a backup, or tell mysqldump explicitly to skip it. I chose the former approach, as it is a backup after all.
Simply add the following lines to /etc/mysql/my.cnf:

[mysqldump]
events = true

This tells the mysqldump program to explicitly include the events table, and removes the warning. You can see a discussion about this option here.
If you are using Debian, you will also need to add that section to /etc/mysql/debian.cnf, as automysqlbackup uses that file for its configuration instead. See the Debian bug report for more details.
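As a sketch, here is the section being appended and then checked, against a temporary copy of the config (swap in /etc/mysql/my.cnf, and /etc/mysql/debian.cnf on Debian, for real use):

```shell
# Demo copy of the config; use /etc/mysql/my.cnf (and debian.cnf) for real
cnf=/tmp/my.cnf.demo
printf '[mysqldump]\nevents = true\n' >> "$cnf"

# Confirm the section made it in
grep -A1 '^\[mysqldump\]' "$cnf"
```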