2017-02-28

How to avoid unnecessary copies when appending to a C++ vector

This blog post explains how to avoid unnecessary copies when appending to a C++ std::vector, and recommends the fast_vector_append helper library, which eliminates most copies automatically.

TL;DR If you are using C++11, and your element classes have an efficient move constructor defined, then just use push_back to append; it won't make any unnecessary copies. In addition, if you are constructing the to-be-appended element, you can use emplace_back to append, which even avoids the (otherwise fast) call to the move constructor.

Copying is slow and needs a lot of (temporary) memory if the object contains lots of data. A long std::string is such an object: the entire array of characters gets copied to a new array. This hurts performance if the copy is unnecessary, e.g. if only a temporary copy is made. For example:

std::string create_long_string(int);

std::vector<std::string> v;
// s2 and s3 are declared outside the block below so that they are still in
// scope for Case F and Case G.
std::string s2 = create_long_string(2);
std::string s3 = create_long_string(3);
{
  // Case A (also applies to s2 and s3 above).
  std::string s1 = create_long_string(1);
  // Case B.
  v.push_back(s1);
  std::cout << s1;
}
// Case C.
v.push_back("foo");
// Case D, from C++11.
v.emplace_back("foo");
// Case E.
v.push_back(create_long_string(4));
// Case F.
v.push_back(std::string()); v.back().swap(s2);
// Case G, from C++11.
v.push_back(std::move(s3));

In Case A, return value optimization prevents the unnecessary copying: the string built in the function body of create_long_string is placed directly into s1 (and likewise into s2 and s3).

In Case B, a copy has to be made (there is no way around it), because v is still valid after s1 is destroyed, thus it cannot reuse the data in s1.

Case C could work without a copy, but in C++98 an unnecessary copy is made. So first std::string("foo") is called (which makes a copy of the data), and then the copy constructor of std::string is called to create a new string (with a 2nd copy of the data), which gets added to v.

Case D avoids the 2nd (unnecessary) copy, but it works only from C++11. In earlier versions of C++ (such as C++98), std::vector doesn't have the emplace_back method.

In Case E, there is an unnecessary copy in C++98: create_long_string creates an std::string, and it gets copied to a new std::string within v. It would be better if create_long_string could create the std::string to its final location.

Case F shows the C++98 workaround for adding s2 to an std::vector without a copy. It's a workaround because it's a bit ugly and it still involves some copying. Fortunately this copying is fast: it copies only the empty string. As a side effect, the value of s2 is lost: it becomes the empty string.

Case G shows the C++11 way of adding s3 to an std::vector without a copy. It doesn't work in C++98 (there is no std::move in C++98). The std::move(s3) visibly documents that the old value of s3 is lost.

C++11 (the version of C++ after C++98) introduces rvalue references, move constructors and move semantics to avoid unnecessary copies. This will fix both Case C and Case E. For this to work, new code needs to be added to the element class (in our case std::string) and to the container class (in our case std::vector) as well. Fortunately, the callers (including our code above and the body of create_long_string) can be kept unchanged. The following code has been added to the C++ standard library (STL) in C++11:

class string {
  ...
  // Copy constructor. C++98, C++11.
  string(const string& old) { ... }
  // Move constructor. Not in C++98, added in C++11.
  string(string&& old) { ... }
  ...
};

template<typename T, ...>
class vector {
  ...
  // Takes a const reference. C++98, C++11.
  void push_back(const T& t);
  // Takes an rvalue reference. Not in C++98, added in C++11.
  void push_back(T&& t);
};

As soon as both of these are added, v.push_back(...) will attempt to call the 2nd method (the one taking an rvalue reference), which calls the move constructor of std::string instead of the copy constructor. This gives us the benefit of no copying, because the move constructor is typically fast: it doesn't copy the data. In general, the move constructor creates the new object from the data of the old object, and it may leave the old object in an arbitrary but valid state. For std::string, it just copies the pointer to the data (which is fast, because it doesn't copy the data itself), and leaves the old std::string empty. Thus Case C and Case E become fast in C++11. Case B is not affected (it still copies), and that's good, because we want to print s1 to cout below, so we need its data to stay in place. This happens automatically: in the call v.push_back(s1), s1 is not an rvalue, thus the const-reference push_back is called, which does a copy. For more details about the magic of selecting the proper push_back, see this tutorial or this tutorial.
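
To illustrate how such a move constructor works, here is a minimal sketch of a toy string-like class (MyString below is hypothetical, not part of the STL); its move constructor steals the data pointer instead of copying the characters:

#include <cstring>

// Toy class for illustration only: it owns a heap-allocated character
// buffer, like a very simplified std::string.
class MyString {
 public:
  explicit MyString(const char *s)
      : size_(std::strlen(s)), data_(new char[size_ + 1]) {
    std::memcpy(data_, s, size_ + 1);
  }
  // Copy constructor: copies the whole buffer (slow for long strings).
  MyString(const MyString& old)
      : size_(old.size_), data_(new char[old.size_ + 1]) {
    std::memcpy(data_, old.data_, size_ + 1);
  }
  // Move constructor (C++11): steals the pointer, no character data is copied.
  MyString(MyString&& old) noexcept : size_(old.size_), data_(old.data_) {
    old.data_ = nullptr;  // Leave the old object in a valid (empty) state.
    old.size_ = 0;
  }
  ~MyString() { delete[] data_; }
 private:
  std::size_t size_;
  char *data_;
};

With such a class, v.push_back(std::move(m)) copies only a pointer and a size, no matter how long the string is.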

Guidelines to avoid unnecessary copies

Define your (element) classes like this:

  • Define the default constructor (C() { ... }).
  • Define the destructor (~C() { ... }).
  • Define the copy constructor (C(const C& c) { ... }).
  • It's a good practice to define operator=, but not needed here.
  • For C++11 classes, define a move constructor (e.g. C(C&& c) { ... }).
  • For C++11 classes, don't define a member swap method. If you must define it, then also define a method void shrink_to_fit() { ... }. It doesn't matter what the method does, you can just declare it. The fast_vector_append library detects shrink_to_fit, and will use the move constructor instead of the swap method, the former being slightly faster, although neither copies the data.
  • For C++98 classes, don't define a move constructor. In fact, C++98 doesn't support move constructors.
  • For C++98 classes, define a member swap method.
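
As a rough sketch of these guidelines (the class name C and its layout are made up for illustration, with the bodies elided just like in the string example above), an element class could look like this, selecting the C++11-only and C++98-only parts with a version check:

class C {
 public:
  C() { /* ... */ }                  // Default constructor.
  ~C() { /* ... */ }                 // Destructor.
  C(const C& c) { /* ... */ }        // Copy constructor.
  C& operator=(const C& c);          // Good practice, but not needed here.
#if __cplusplus >= 201103L
  // C++11: define a move constructor, and don't define a member swap method.
  C(C&& c) { /* ... */ }
#else
  // C++98: there are no move constructors; define a member swap method instead.
  void swap(C& c) { /* ... */ }
#endif
};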

To append a new element to an std::vector without unnecessary copying, as fast as possible, follow this advice from top to bottom:

  • If it's C++11 mode, and the object is being constructed (not returned by a function!), use emplace_back without the element class name.
  • If it's C++11 mode, and the class has a move constructor, use push_back.
  • If it's C++11 mode, and the class has the member swap method, use: { C c(42); v.resize(v.size() + 1); v.back().swap(c); }
  • If the class has the member swap method, use: { C c(42); v.push_back(C()); v.back().swap(c); }
  • Use push_back. (This is the only case with a slow copy.)

Automating the avoidance of unnecessary copies when appending to a vector

It would be awesome if the compiler could guess the programmer's intentions: it would pick emplace_back if it is faster than push_back, and it would avoid the copy even in C++98 code, e.g. by using swap if it's available but the move constructor isn't. This is important because sometimes it's inconvenient to modify the old parts of a codebase that define the element class, and such a class often already has swap.

For automation, use fast_vector_append(v, ...) in the fast_vector_append library to append elements to an std::vector. It works in both C++98 and C++11, but it can avoid more copies in C++11. The example above looks like:

#include "fast_vector_append.h"
std::string create_long_string(int);

std::vector<std::string> v;
// s2 and s3 are declared outside the block below so that they are still in
// scope for Case F and Case G.
std::string s2 = create_long_string(2);
std::string s3 = create_long_string(3);
{
  // Case A. No copy.
  std::string s1 = create_long_string(1);
  // Case B. Copied.
  fast_vector_append(v, s1);
  std::cout << s1;
}
// Case C. Not copied.
fast_vector_append(v, "foo");
// Case D. Not copied.
fast_vector_append(v, "foo");
// Case E. Copied in C++98.
fast_vector_append(v, create_long_string(4));
{ std::string s4 = create_long_string(4);
  // Case E2. Not copied.
  fast_vector_append_move(v, s4);
}
// Case F. Not copied.
fast_vector_append_move(v, s2);
// Case G. Not copied.
fast_vector_append_move(v, s3);
// Case H. Copied in C++98.
fast_vector_append(v, std::string("foo"));

Autodetection of class features with SFINAE

The library fast_vector_append does some interesting SFINAE tricks to autodetect the features of the element class, so that it will be able to use the fastest way of appending supported by the class.

For example, this is how it detects whether to use the member swap method:

// Use swap iff: has swap, doesn't have std::get, doesn't have shrink_to_fit,
// doesn't have emplace, doesn't have remove_suffix. By doing so we match
// all C++11, C++14 and C++17 STL templates except for std::optional and
// std::any. Not matching a few of them is not a problem because then member
// .swap will be used on them, and that's good enough.
//
// Based on HAS_MEM_FUNC in http://stackoverflow.com/a/264088/97248 .  
// Based on decltype(...) in http://stackoverflow.com/a/6324863/97248 .
template<typename T>   
struct __aph_use_swap {
  template <typename U, U> struct type_check;
  // This also checks the return type of swap (void). The checks with
  // decltype below don't check the return type.
  template <typename B> static char (&chk_swap(type_check<void(B::*)(B&), &B::swap>*))[2];
  template <typename  > static char (&chk_swap(...))[1];
  template <typename B> static char (&chk_get(decltype(std::get<0>(*(B*)0), 0)))[1];  
  // ^^^ C++11 only: std::pair, std::tuple, std::variant, std::array.
  template <typename  > static char (&chk_get(...))[2];
  template <typename B> static char (&chk_s2f(decltype(((B*)0)->shrink_to_fit(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::string, std::basic_string.
  template <typename  > static char (&chk_s2f(...))[2];
  template <typename B> static char (&chk_empl(decltype(((B*)0)->emplace(), 0)))[1];
  // ^^^ C++11 only: std::vector, std::deque, std::set, std::multiset, std::map, std::multimap, std::unordered_multiset, std::unordered_map, std::unordered_multimap, std::stack, std::queue, std::priority_queue.
  template <typename  > static char (&chk_empl(...))[2];
  template <typename B> static char (&chk_rsuf(decltype(((B*)0)->remove_suffix(0), 0)))[1];
  // ^^^ C++17 only: std::string_view, std::basic_string_view.
  template <typename  > static char (&chk_rsuf(...))[2];
  static bool const value = sizeof(chk_swap<T>(0)) == 2 && sizeof(chk_get<T>(0)) == 2
      && sizeof(chk_s2f<T>(0)) == 2 && sizeof(chk_empl<T>(0)) == 2
      && sizeof(chk_rsuf<T>(0)) == 2;
};

The autodetection is used like this, to select one of the 2 implementations (either with v.back().swap(t) or v.push_back(std::move(t))):

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    __aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.resize(v.size() + 1); v.back().swap(t); }                               

template<typename V, typename T> static inline
typename std::enable_if<std::is_same<typename V::value_type, T>::value &&
    !__aph_use_swap<typename V::value_type>::value, void>::type
fast_vector_append(V& v, T&& t) { v.push_back(std::move(t)); }
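
As a quick sanity check of the autodetection (a sketch only: __aph_use_swap is an internal helper of the library, so this assumes it is visible, e.g. by including fast_vector_append.h), C++11 static_asserts can show which branch gets selected:

#include <string>
#include <vector>

// Both std::string and std::vector have shrink_to_fit, so __aph_use_swap is
// false for them and the push_back(std::move(t)) implementation is selected.
static_assert(!__aph_use_swap<std::string>::value,
              "std::string is appended with push_back(std::move(t))");
static_assert(!__aph_use_swap<std::vector<int> >::value,
              "std::vector<int> is appended with push_back(std::move(t))");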

2017-02-24

How to back up your WhatsApp chats and photos safely on Android

This blog post explains how to make backups of your WhatsApp chats and photos safely on your Android device, and how to restore your backups. By safely we mean that you won't lose data unless you remove some backup files manually.

WhatsApp saves all chats to the WhatsApp/Databases folder on the phone's storage (sdcard), and it saves all photos and other media files to the WhatsApp/Media folder. (In fact, of the chats only the file WhatsApp/Databases/msgstore.db.cryptNNN may be needed, where NNN is an integer, currently 12.) If you make a copy of these folders, and copy them back to a new or reinstalled Android device before installing WhatsApp, then this effectively restores the backup: WhatsApp will recognize and use these files the first time it's installed (you need to tap on the Restore button within WhatsApp). See this FAQ entry for more information on restoring WhatsApp backups on Android.

WhatsApp supports creating backups to Google Drive (automatically, every day), and restoring those backups when the app is (re)installed. This sounds convenient and safe, but it isn't safe: you can still lose your chat history and photos (see below how). So if you care about your WhatsApp chat history and photos, and you need an automated backup for them, here is my recommendation: use the FolderSync Lite free Android app to make automatic backups to the cloud (it supports Google Drive, Dropbox and more than 20 other cloud providers). To restore the backup, you can use FolderSync Lite, or you can download the files and copy them to your Android device manually.

Here is how to set up FolderSync Lite on Android for automatic backups of WhatsApp chats, photos and other media:

  1. Create a Google account, open Google Drive, create a folder named FolderSyncBackup-WhatsApp, and within it create subfolders Databases and Media (both case sensitive). It can also be done similarly on Dropbox instead, but this tutorial focuses on Google Drive.
  2. Install FolderSync Lite to your Android device.
  3. Add your Google account to FolderSync Lite.
  4. Create a folderpair for backing up chats:
    • Account: your Google account
    • Unique name: wad
    • Sync type: To remote folder
    • Remote folder: /FolderSyncBackup-WhatsApp/Databases/
    • Local folder: .../WhatsApp/Databases/
    • Use scheduled sync: yes
    • Sync interval: Daily
    • Copy files to time-stamped folder: no
    • Sync subfolders: yes
    • Sync hidden files: yes
    • Delete source files after sync: no
    • Retry sync if failed: yes
    • Only resync source files if modified (ignore target deletion): yes
    • Sync deletions: no
    • Overwrite old files: Always
    • If conflicting modifications: Skip file
    • Use WiFi: yes
    • (Many settings below are fine, skipped here.)
  5. Save the folderpair, and do the first sync manually.
  6. Create a folderpair for backing up media files, including photos:
    • Account: your Google account
    • Unique name: wam
    • Sync type: To remote folder
    • Remote folder: /FolderSyncBackup-WhatsApp/Media/
    • Local folder: .../WhatsApp/Media/
    • (Subsequent settings are the same as above.)
  7. Save the folderpair, and do the first sync manually.
  8. Optionally, you can turn off WhatsApp's automatic backup to Google Drive in the WhatsApp app's chat settings.
  9. To remove WhatsApp's automatic backup files from Google Drive, go to Google Drive, click on the gear icon (Settings), click on Settings, click on Manage Apps, find and click on the Options of WhatsApp Messenger, click on Delete app data, and then click on Disconnect from Drive.

WhatsApp saves the entire chat history so far to a new file every day (file name pattern: WhatsApp/Databases/msgstore-????-??-??.*.db.crypt*). These files will accumulate and fill up your Google Drive quota in a year or two, so you may want to remove old files. You can do it manually on the Google Drive web UI: just visit the FolderSyncBackup-WhatsApp/Databases folder, select old files (by date), and remove them. Alternatively, you can automate removal using an Apps Script. Here is how:

  1. Visit http://script.google.com/
  2. You see a script editor window. Remove the existing code (function myFunction() { ... }).
  3. In the File / Rename... menu, rename the project to "script to remove old WhatsApp backups".
  4. Copy-paste the following code to the script editor window:
    var FOLDER_NAME = 'FolderSyncBackup-WhatsApp';
    // Please note that msgstore.db.crypt12 will always be kept in addition to the dated files.
    var MINIMUM_DBS_TO_KEEP = 8;  // A value smaller than 8 doesn't make sense, WhatsApp keeps that much, and these files would be reuploaded by FolderSync.
    var USE_TRASH = true;
    var DO_DRY_RUN = false;
    
    function getSingleFolder(foldersIter) {
      var folder = null;
      while (foldersIter.hasNext()) {
        if (folder != null) throw new Error("Multiple folders found.");
        folder = foldersIter.next();
      }
      if (folder == null) throw new Error("Folder not found.");
      return folder;
    }
    
    function getAll(iter) {
      var result = [];
      while (iter.hasNext()) {
        result.push(iter.next());
      }
      return result;
    }
    
    function compareByName(a, b) {
      var an = a.getName(), bn = b.getName();
      return an < bn ? -1 : an == bn ? 0 : 1;
    }
    
    function removeOldWhatsAppBackups() {
      Logger.log('<removeOldWhatsAppBackups>');
      Logger.log('Config FOLDER_NAME=' + FOLDER_NAME);
      Logger.log('Config MINIMUM_DBS_TO_KEEP=' + MINIMUM_DBS_TO_KEEP);
      Logger.log('Config USE_TRASH=' + USE_TRASH);
      Logger.log('Config DO_DRY_RUN=' + DO_DRY_RUN);
      // var folders = DriveApp.getFolders();  // Also lists subfolders.
      var folder = getSingleFolder(getSingleFolder(DriveApp.getRootFolder().getFoldersByName(FOLDER_NAME)).getFoldersByName('Databases'));
      var files = getAll(folder.getFiles());
      var sortedFiles = [];
      var i;
      for (i = 0; i < files.length; ++i) {
        var file = files[i];
        var name = file.getName();
        // Logger.log(name + ': ' + file.getDateCreated());  // No last-modification time :-(.
        if (name.match(/^msgstore-\d\d\d\d-\d\d-\d\d[.]/)) {
          sortedFiles.push(file);
        }
      }
      sortedFiles.sort(compareByName);  // The name reflects the date. Earlier file first.
      var toKeep = MINIMUM_DBS_TO_KEEP;
      if (toKeep < 1) toKeep = 1;
      for (i = 0; i < sortedFiles.length - toKeep; ++i) {
        var file = sortedFiles[i];
        // Logger.log('?? ' + file.getName() + ': ' + file.getSize());
        // A 100-byte decrease in file size is tolerable, can be a compression artifact.
        if (file.getSize() - 100 < sortedFiles[i + 1].getSize()) {
          if (DO_DRY_RUN) {
          } else if (USE_TRASH) {
            file.setTrashed(true);
          } else {
            DriveApp.removeFile(file);
          }
          // Utilities.sleep(100); // Prevent exceeding rate limit (currently 10 requests per second). Is it still in effect?
          Logger.log('-- Removing: ' + file.getName() + ': ' + file.getSize());
        } else {
          Logger.log('Keeping: ' + file.getName() + ': ' + file.getSize());
        }
      }
      for (; i < sortedFiles.length; ++i) {
        var file = sortedFiles[i];
        Logger.log('Keeping recent: ' + file.getName() + ': ' + file.getSize());
      }
      Logger.log('</removeOldWhatsAppBackups>');
    }
  5. Click on the File / Save menu to save the script.
  6. Open the Resources / All your triggers menu window, choose Add a new trigger, add this: removeOldWhatsAppBackups, Time-driven, Day timer, 5am to 6am, also add a notification to yourself in e-mail, sent at 6am.
  7. Click on Save to save the triggers.
  8. You may wonder if this Apps Script deletes chat backups in a safe way. You have to decide this for yourself. It keeps the last 8 backups, and it also keeps all backups after which the backup file size has decreased. This latter condition prevents the data loss case described below (in the next section).

That's it, automatic and safe backup of WhatsApp chats, photos and other media files to Google Drive is now set up using FolderSync Lite.

Is WhatsApp's built-in backup to Google Drive safe?

No, it isn't: you can lose all your data in some cases. The data loss happened to a friend in Feb 2017 in the following way:

  • The Android phone was lost, thus the WhatsApp folder on the phone wasn't available.
  • There was a working and recent backup on Google Drive.
  • When WhatsApp was installed to a new phone, it started restoring the backup from Google Drive.
  • Shortly after the start of the restore, the internet connection broke and the restore was aborted. Only a small part of the messages was restored.
  • The owner of the phone didn't notice that many chats were missing, and started using WhatsApp.
  • Within 24 hours, WhatsApp created a new backup to Google Drive, overwriting the old, full data with the new partial data. At this point the majority of the old chats got lost.
  • A couple of days later the owner of the phone noticed, but it was too late.

If WhatsApp's built-in backup to Google Drive had been written in a way that it never overwrites old backups, it would have been possible to reinstall WhatsApp and restore all the chats without data loss. Unfortunately the users of WhatsApp have no control over how WhatsApp does backups. But they can use an alternative backup method which never loses data (see the method with FolderSync Lite above).

Another way to safely restore WhatsApp's built-in backup from Google Drive would be to download it from Google Drive first, and keep a copy until WhatsApp has successfully restored everything on the new Android device. Unfortunately this is impossible, because the user can't download their own backup from Google Drive (!): it is saved to a hidden app folder, which only the WhatsApp app can read and write, and the user is unable to access it (apart from deleting it). This StackOverflow question has an answer which describes a cumbersome and fragile way of downloading it, but this can easily change in the future, so I don't recommend it for general use. I recommend the method with FolderSync Lite above instead, which makes it easy for the user to see and download their WhatsApp backup from Google Drive.

2017-02-07

Checking whether a Unix filename is safe in C

This blog post explains how to check whether a filename is safe for overwriting the file contents on a Unix system, and it also contains C code to do the checks.

The use case is that an archive (e.g. ZIP) extractor is extracting an untrusted archive and creating and overwriting files. The archive may be created by someone malicious, trying to trick the extractor to overwrite sensitive system files such as /etc/passwd. Of course this would only work if the process running the extractor has the permission to modify system files (which it normally doesn't have, because it's not running as root). The most basic protection the extractor can provide is refusing to write to a file with an absolute name (i.e. starting with /), so that even if it's running as root, it would do no harm if it's not running in the root directory.

However, checking whether the first character is a / isn't good enough, because the attacker may specify a file with name ../.././../../../.././../tmp/../etc/././passwd, which is equivalent to /etc/passwd if the current directory isn't deep enough. Enforcing the following conditions makes sure that such attacks are not possible:

  • The filename isn't empty.
  • The filename doesn't start or end with a /.
  • The strings . and .. are not present as a pathname component (i.e. as an item when the filename is split on /).
  • The filename doesn't contain a double slash: //.

Here is how to check all these in C:

int is_filename_safe(const char *p) {
  const char *q;
  for (;;) {
    for (q = p; *p != '\0' && *p != '/'; ++p) {}
    /* In the first iteration: An empty filename is unsafe. */
    /* In the first iteration: A leading '/' is unsafe. */
    /* In subsequent iterations: A trailing '/' is unsafe. */
    /* In subsequent iterations: A "//" is unsafe. */
    if (p == q ||
    /* A pathname component "." is unsafe. */
    /* A pathname component ".." is unsafe. */
        (*q == '.' && (q + 1 == p || (q[1] == '.' && q + 2 == p)))) {
      return 0;  /* Unsafe. */
    }
    if (*p++ == '\0') return 1;  /* Safe. */
  }
}
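
As a quick sanity check, here is a small test driver for the function above (a sketch: the main function and the example filenames are just for illustration). It prints 1 for each safe name and 0 for each unsafe one:

#include <stdio.h>

int is_filename_safe(const char *p);  /* Defined above. */

int main(void) {
  static const char *names[] = {
    "foo/bar.txt",                                      /* Safe. */
    "etc/passwd",                                       /* Safe: relative, no tricks. */
    "",                                                 /* Unsafe: empty. */
    "/etc/passwd",                                      /* Unsafe: absolute. */
    "../.././../../../.././../tmp/../etc/././passwd",   /* Unsafe: has "." and "..". */
    "foo//bar",                                         /* Unsafe: double slash. */
    "foo/",                                             /* Unsafe: trailing slash. */
  };
  unsigned i;
  for (i = 0; i < sizeof(names) / sizeof(names[0]); ++i) {
    printf("%d %s\n", is_filename_safe(names[i]), names[i]);
  }
  return 0;
}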

An even better solution is running the extractor in an isolated sandbox (jail), thus protecting against malicious input and all kinds of software bugs.

2016-11-08

How to fix Python SSL errors when downloading https pages

This blog post explains how to fix Python SSL errors when downloading web pages using the https:// protocol in Python (e.g. with urllib, urllib2, httplib or requests). This blog post has been written because many other online sources haven't given direct and useful advice on how to fix the errors below.

How to fix SSL23_GET_SERVER_HELLO unknown protocol

This error looks like (possibly with a line number different from 504):
    self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

The fix is to upgrade your Python:

  • If you are using Python 2, upgrade to at least 2.7.7. It's recommended to upgrade to the latest (currently 2.7.11) though, or at least to 2.7.9 (which has the ssl module backported from 3.4, including the ssl.SSLContext customizations). I have tested that the error above disappears when upgrading from 2.7.6 to 2.7.7. If you can't easily upgrade the Python 2 on the target system, you may want to try StaticPython on Linux (the stacklessco2.7-static and stacklessxx2.7-static binaries have OpenSSL and a recent enough Python) or PyRun on Linux, macOS, FreeBSD and other Unix systems.
  • If you are using Python 3, upgrade to at least 3.4.3. It's recommended to upgrade to the latest (3.5.2) though.
  • If you are unable to upgrade your Python 2.x, try this workaround; it works in some cases (e.g. on Ubuntu 10.04) and on some websites:
    import ssl
    from functools import partial
    ssl.wrap_socket = partial(ssl.wrap_socket, ssl_version=ssl.PROTOCOL_TLSv1)

    There is a similar workaround for ssl.sslwrap_simple which also affects socket.ssl.

  • If you are unable to upgrade from Python 2.6.x, 3.2.x or 3.3.x, use backports.ssl.
  • If you are unable to upgrade from Python 1.x-2.5 or 3.0.x-3.1.x, then probably there is no easy fix for you.

Typically it's not necessary to upgrade your OpenSSL library just to fix this error, ancient versions such as OpenSSL 0.9.8k (released on 2009-03-25) also work if Python is upgraded. The latest release from the 0.9.8 series (currently 0.9.8zh) or from the 1.0 series or from the 1.1 series should all work. But if you have an easy option to upgrade, then upgrade to at least the latest LTS (long-term-support) version (currently 1.0.2j).

How to fix SSL CERTIFICATE_VERIFY_FAILED

This error looks like (possibly with a line number different from 509):
    self._sslobj.do_handshake()
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

Verification of server certificates by default was introduced to Python recently (in 2.7.9). This protects against man-in-the-middle attacks, and it assures the client that the server is indeed who it claims to be.

As a quick (and insecure) fix, you can turn certificate verification off, by at least one of these:

  • Set the PYTHONHTTPSVERIFY environment variable to 0 before the ssl module is loaded, e.g. run export PYTHONHTTPSVERIFY=0 before you start the Python script.
  • (alternatively) Add this to your code before doing the https:// request (it affects all requests from then on):
    import os, ssl
    if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
        getattr(ssl, '_create_unverified_context', None)):
      ssl._create_default_https_context = ssl._create_unverified_context

The proper, secure fix though is to install the latest root certificates on your computer, to a directory where the OpenSSL library used by Python finds them. Your operating system may be able to do it conveniently for you; for example on Ubuntu 14.04, running this usually fixes it: sudo apt-get update && sudo apt-get install ca-certificates.

2016-07-15

How to compile Lepton (JPEG lossless recompressor by Dropbox) without autotools

This blog post explains how to compile Lepton, the JPEG lossless recompressor recently released by Dropbox, without autotools on Linux.

You'll need a fairly recent C++ compiler with the development libraries installed. g++-4.4 is too old, g++-4.8 is good enough.

Compile with the following command (without the leading $):

$ git clone https://github.com/dropbox/lepton
$ cd lepton
$ # Optional: git reset --hard e3f6c46502d958aaba17fbe7c218f8ce2b8b3f48
$ g++ -std=c++0x -W -Wall -Wextra -Wno-unused-parameter -Wno-write-strings \
    -msse4.1 \
    -o lepton \
    -fno-exceptions -fno-rtti \
    -s -O2 \
    -DGIT_REVISION=\"fake\" \
    -I./src/vp8/decoder \
    -I./src/vp8/encoder \
    -I./src/vp8/model \
    -I./src/vp8/util \
    dependencies/md5/md5.c \
    src/io/MemMgrAllocator.cc \
    src/io/MemReadWriter.cc \
    src/io/Seccomp.cc \
    src/io/Zlib0.cc \
    src/io/ZlibCompression.cc \
    src/io/ioutil.cc \
    src/lepton/bitops.cc \
    src/lepton/fork_serve.cc \
    src/lepton/idct.cc \
    src/lepton/jpgcoder.cc \
    src/lepton/lepton_codec.cc \
    src/lepton/recoder.cc \
    src/lepton/simple_decoder.cc \
    src/lepton/simple_encoder.cc \
    src/lepton/socket_serve.cc \
    src/lepton/thread_handoff.cc \
    src/lepton/uncompressed_components.cc \
    src/lepton/validation.cc \
    src/lepton/vp8_decoder.cc \
    src/lepton/vp8_encoder.cc \
    src/vp8/decoder/boolreader.cc \
    src/vp8/decoder/decoder.cc \
    src/vp8/encoder/boolwriter.cc \
    src/vp8/encoder/encoder.cc \
    src/vp8/model/JpegArithmeticCoder.cc \
    src/vp8/model/model.cc \
    src/vp8/model/numeric.cc \
    src/vp8/util/billing.cc \
    src/vp8/util/debug.cc \
    src/vp8/util/generic_worker.cc \
    src/vp8/util/memory.cc \
    -lz \
    -lpthread \
;

$ ./lepton --help

It works both on i386 (force it with -m32) and amd64 (force it with -m64) architectures. Other architectures are not supported, because Lepton uses the SSE4.1 CPU instructions.

It also works with -std=c++11, but it doesn't work with -std=c++98.

Lepton also has a copy of zlib embedded; you can compile that copy with gcc instead of g++, and use it instead of -lz.

These instructions were tested and found working with the following version of Lepton:

$ git rev-parse HEAD
# e3f6c46502d958aaba17fbe7c218f8ce2b8b3f48

2016-06-20

How to back up original photo files as is in Google

This blog post explains how to back up photos on Google Photos and Google Drive as is, keeping the original images files, bit-by-bit identical, without any scaling or reencoding.

TL;DR If you want to keep the original image files, upload the photos to Google Drive, which keeps the original files (bit-by-bit identical as uploaded), and their size counts against your Google storage quota (see your usage). Don't upload any image file to Google Photos.

TL;DR If you want unlimited image uploads with the option of downloading the original image file (bit-by-bit identical), consider options other than Google Photos (such as Flickr and Deviantart).

On Google Photos you can upload some images for free (i.e. those images don't count against your Google storage quota). This is the most important advantage for uploading to Google Photos (rather than Google Drive). But there are some important caveats:

  • You need to decide before uploading if you want free (it's called high quality) or not (original). Select it in the Google Photos settings. This setting won't affect photos uploaded from your Android devices by the photo backup app.
  • If you decide non-free (original), future uploads will be counted against your quota, no matter the size, the file format or the quality. That is, even small, low-quality JPEGs will count against your quota.
  • If you choose free and you upload a PNG file of at most 16 megapixels, the original file is kept, and you'd be able to download it later.
  • If you choose free and you upload a PNG file of more than 16 megapixels, then it will be scaled down and reencoded.
  • If you choose free and you upload a JPEG file, then the photo gets scaled down to 16 megapixels (no change if it's already small enough), and then reencoded with a quality loss (small enough that most humans don't notice it); some metadata (e.g. EXIF) is removed or rearranged, and only the scaled and reencoded JPEG file is available for download.
  • Google Photos does deduplication of your images. This has an unintended consequence. If you upload a photo 3 times to 3 different albums, and you move the photo to the trash, it will be removed from all 3 albums. There is no way to move some photos in an album to the trash without affecting other albums.
  • Google Photos does deduplication even across qualities. Thus if you upload an image as original first, and then upload it again as high quality, the high quality version will be ignored, and the original version will be present in both albums. It's also true the other way round: if you upload high quality first, then subsequent original uploads of the same image will be ignored.
  • Even with non-free (original), Google doesn't remember the original file name, as uploaded: it converts e.g. the .JPG extension to .jpg (lower case).
  • Immediately after the upload, the image info page shows incorrect information, and the Download link serves an incorrect (lower resolution) version of the image. For example, when I uploaded a new 1.1 MB JPEG file in high quality mode, the image info was showing 1.1 MB, but when downloading it, it became a 340 kB JPEG file. After reloading the image page, the image info was showing 550 KB, and downloading it yielded a file of that size. This makes experimenting with image upload sizes confusing.
  • This is as of 2016-06-20, the behavior of Google Photos may change in the future.

Because of these caveats and unexpected behavior, to avoid quality loss, my recommendation is not to use Google Photos for backing up JPEG image files.

2016-05-30

How to install Hungarian spell checking and hyphenation for LibreOffice and OpenOffice on Linux

This blog post explains how to install Hungarian spell checking and hyphenation for LibreOffice and OpenOffice on Linux. The instructions were tested on Ubuntu Trusty, but they should work well on other Linux distributions as well with small modifications.

  1. The installation consists of downloading the right files, copying them to the right location, and restarting LibreOffice and/or OpenOffice.
  2. Install LibreOffice or OpenOffice with your favorite package manager if not already installed.
  3. Download the files from http://magyarispell.sf.net/ (no need to click now) by running these commands (without the leading $) in a terminal window:
    $ wget -O /tmp/hu_HU-1.6.1.tar.gz https://sourceforge.net/projects/magyarispell/files/Magyar%20Ispell/1.6.1/hu_HU-1.6.1.tar.gz/download #
    $ wget -O /tmp/huhyphn_v20110815_LibO.tar.gz https://sourceforge.net/projects/magyarispell/files/OOo%20Huhyphn/v20110815-0.1/huhyphn_v20110815_LibO.tar.gz/download #
    

    The commands are long, so please make sure to copy-paste each entire line (both end with #).

    Be patient, you may have to wait for 30 seconds for each download.

  4. Run these commands to copy the files to the right location:
    $ (cd /tmp && tar xzvf /tmp/hu_HU-1.6.1.tar.gz)
    $ (cd /tmp/hu_HU-1.6.1 && sudo cp hu_HU.aff hu_HU.dic /usr/share/hunspell/)
    $ (cd /tmp && tar xzvf /tmp/huhyphn_v20110815_LibO.tar.gz)
    $ (cd /tmp/huhyphn_v20110815_LibO && sudo cp hyph_hu_HU.dic /usr/share/hyphen/)
  5. If it's already running, exit from LibreOffice and OpenOffice.
  6. Start LibreOffice or OpenOffice, type asztall, select it, and change the language to Hungarian in Format / Character. Now the text should be underlined with red, and when you right-click on it, the suggestion asztal should be offered.

2016-04-06

How to disable (reject) any root password on Debian and Ubuntu

This blog post explains how to disable (reject) any root password on Debian and Ubuntu, thus rejecting login attempts as root. Becoming root with sudo (by typing the calling user's password) or ssh (using a public key) remains possible.

TL;DR Run as root: passwd -d -l root

How to become root if password-based root logins are (or will be) disabled?

Before disabling password-based root logins, make sure you have other ways to become root. One possible way is running sudo (without arguments) from a non-root user. To make this work, first you have to install sudo by running this (without the leading #) as root:

# apt-get install sudo

(Ubuntu systems come with sudo preinstalled, Debian systems don't have it by default.) Then run this as root, replacing MyUser with your non-root login name:

# adduser MyUser sudo

After running this, running sudo as that user will ask for the user's password (not the root password), and when typed correctly, you will get a root shell, and will be able to run commands as root. (Type exit to exit from the root shell.)

An alternative to sudo for becoming root without a password is running ssh root@localhost. For that you need a properly configured sshd (with PermitRootLogin without-password or PermitRootLogin yes in /etc/ssh/sshd_config), creating an SSH key pair and appending the public key to /root/.ssh/authorized_keys. If you need help setting this up or using it, then please ask a Unix or Linux guru friend.

How to disable password-based root logins

To disable (reject) any root password on Debian and Ubuntu, run this (without the leading #) as root:

# passwd -d -l root

This effectively changes the 2nd field of the line starting with root: in /etc/shadow to !, thus the line will start with root:!:, making login, su and ssh (when using password authentication, i.e. no public key) reject login attempts as root. Typically the password wouldn't even be asked for, but if it is, any password would be rejected. An alternative to the command above is editing the /etc/shadow file manually (as root), and adding the !. Also, the -d flag is not necessary; without it the password hash is still kept in /etc/shadow (but a ! is prepended to disable it).

Ubuntu comes with this default (root:!: in /etc/shadow), Debian doesn't.

If you want to disable the root password in ssh only (and allow password-based root logins in login and su), then instead of running the command above, add (or change) the line

PermitRootLogin without-password

to /etc/ssh/sshd_config (as root), and then run (as root):

# /etc/init.d/ssh restart

Please note that there are ways to permit a root login without a password (or with an empty password), but this is very bad security practice, so this blog post doesn't explain how to do it.

How to enable password-based root logins

To enable password-based root logins again, run this as root:

# passwd root

It will ask you to specify the new password for root.

2016-02-22

keybase.txt

Please disregard this post, it's a cryptographic proof checked by keybase.io/pts.

==================================================================
https://keybase.io/pts
--------------------------------------------------------------------

I hereby claim:

  * I am an admin of https://ptspts.blogspot.com
  * I am pts (https://keybase.io/pts) on keybase.
  * I have a public key with fingerprint 537D 2E8D 6FAB 3265 9A1F  8767 33BB 974C 2FE0 93F2

To do so, I am signing this object:

{
    "body": {
        "key": {
            "eldest_kid": "01110f1fe32d1d6263e9674e11d7249ac66f46d9f8c54c896c16205dcf68203493b90a",
            "fingerprint": "537d2e8d6fab32659a1f876733bb974c2fe093f2",
            "host": "keybase.io",
            "key_id": "33bb974c2fe093f2",
            "kid": "01110f1fe32d1d6263e9674e11d7249ac66f46d9f8c54c896c16205dcf68203493b90a",
            "uid": "51f5f6cc0a558175f85f29dc164fe900",
            "username": "pts"
        },
        "service": {
            "hostname": "ptspts.blogspot.com",
            "protocol": "https:"
        },
        "type": "web_service_binding",
        "version": 1
    },
    "ctime": 1456178821,
    "expire_in": 157680000,
    "prev": "b14bb983d61e5b2097e313d7b246fa7e17030b598d6ae6a7a1545f36b5ef75c2",
    "seqno": 27,
    "tag": "signature"
}

which yields the signature:

-----BEGIN PGP MESSAGE-----
Version: GnuPG v1

owGtUj1rVEEUfSYaMSCYxk6L6dRlMx9vvhZsLLQRtBAbi2U+7mwe2bx5vveyMYRt
JYWFIGil4t9QG20sUu4fEG20CtrYWDizKDaWDgPDvXPOmTP33sdnV4uVDfbm2c3N
j0++nzj6aou7Rw8PD5CNfh+NDtA2LA+Yeuj68Xbl0QhhQggOJACjnnhBBQMtZAmE
eElLbZwQoRReB+V46ZQWjgiKuXdBKIpZqZnV2KABClU9gbZpq7pPspxJT0F5EYxl
VHBtSFBSSMas1bJ0NADWLNBE3IpdZiRz1nQwrGLKpWC8tPcP/H/2vbuU4yTwIJzD
hnNFJA+KB6p9IpUBNMYZ2EFbmx1I6Kbv0HyAUmJWOcg1zZ/4e5n20E7jpGtiP3Rx
J7GbNvbRxWkCbPV9042yQL/fZMYe2PFvrbGtap8qmRgzaLsq1mhEEtL1VRYnJRdE
KkXJAMGDpmphXGUEl0LhtPI7MEuSlpTW3tOKeUGAW4q1BEaYl5aWqScSiMQMW65T
iwwIIw3hJQ9MWA5BcpcL3cH9OqIRlcmomSTRrprUpt9tAc3XD6+fLDZWirVTK3nG
ivUz5/4MXnereH7xx+3zcfH05+LT26ub7+68evR+/q14celaWDv+fGXR3Fg9Xny5
fPrlh9cXfgE=
=N5V9
-----END PGP MESSAGE-----

And finally, I am proving ownership of this host by posting or
appending to this document.

View my publicly-auditable identity here: https://keybase.io/pts

==================================================================

micropython for Linux i386, statically linked

This blog post is to announce statically linked binaries for Linux i386 of MicroPython.

MicroPython (Python for microcontrollers) is an open source reimplementation (see sources on GitHub) of the Python 3.4 language for microcontrollers with very little RAM (as low as 60 kB). The CPython interpreter is not used at all: MicroPython has a completely separate implementation in C, supporting the full Python 3.4 language syntax, but with a much smaller standard library (i.e. much fewer modules and classes, and existing classes have fewer methods). Unicode strings (i.e. the str class) are supported though.

MicroPython can be cross-compiled to many different platforms, including multiple microcontrollers (including the ESP8266 ($5) and the pyboard ($40)) and to Unix systems (including Linux). The micropython binary seems to be 17.56 times smaller than the python binary for Linux i386 (both binaries were statically linked against uClibc using https://github.com/pts/pts-clang-xstatic/blob/master/README.pts-xstatic.txt, and optionally compressed with UPX). The detailed file sizes are:

The script for recompiling MicroPython for Linux i386, statically linked is also open source.

Please note that neither StaticPython nor MicroPython opens any external files (such as .so or .py or .zip) when starting up: the entire Python interpreter and the Python standard library (and the libc as well) are statically linked into the binary executable.

2016-01-06

How to extract comments from a JPEG file

This blog post explains how to extract comments from a JPEG file. Each JPEG file consists of segments. Each segment describes parts of the image data or metadata. The comments are in segments with marker COM (0xfe), there can be any number of them, anywhere (usually before the SOS segment) in the file.

Use the rdjpgcom command-line tool to extract comments. The tool is part of libjpeg, and on Ubuntu and Debian systems it can be installed with (don't type the leading $):

$ sudo apt-get install libjpeg-progs

Once installed, use it like this to print all comments in the JPEG file, with a terminating newline added to each:

$ rdjpgcom file.jpg

If the file doesn't have any comment, the output of rdjpgcom is empty. Here is how to add comments:

$ wrjpgcom -c 'COMfoo' com0.jpg >com1.jpg
$ wrjpgcom -c 'COMbar' com1.jpg >com2.jpg
After adding the comments, it will look like this:
$ rdjpgcom com2.jpg
COMfoo
COMbar

If you also want to see the unprintable characters (unsafe on a terminal), pass the -raw flag:

$ rdjpgcom -raw file.jpg

If you need a library, use the following C code, which is a minimalistic reimplementation of rdjpgcom -a:

/*
 * getjpegcom.c: Get comments from JPEG files.
 * by pts@fazekas.hu at Wed Jan  6 20:48:07 CET 2016
 *
 *   $ gcc -W -Wall -Wextra -Werror -s -O2 -o getjpegcom getjpegcom.c &&
 *   $ ./getjpegcom <file.jpg
 */

#include <stdio.h>
#if defined(MSDOS) || defined(WIN32)
#include <fcntl.h>  /* setmode. */
#endif

/* Get and print all comments in a JPEG file. Comments are written to of,
 * with a newline appended as terminator.
 *
 * Returns 0 on success, or a negative number on error.
 */
static int get_jpeg_comments(FILE *f, FILE *of) {
  int c, m;
  unsigned ss;
#if defined(MSDOS) || defined(WIN32)
  setmode(fileno(f), O_BINARY);
  setmode(fileno(of), O_BINARY);
#endif
  if (ferror(f)) return -1;
  /* A typical JPEG file has markers in these order:
   *   d8 e0_JFIF e1 e1 e2 db db fe fe c0 c4 c4 c4 c4 da d9.
   *   The first fe marker (COM, comment) was near offset 30000.
   * A typical JPEG file after filtering through jpegtran:
   *   d8 e0_JFIF fe fe db db c0 c4 c4 c4 c4 da d9.
   *   The first fe marker (COM, comment) was at offset 20.
   */
  if ((c = getc(f)) < 0) return -2;  /* Truncated (empty). */
  if (c != 0xff) return -3;
  if ((c = getc(f)) < 0) return -2;  /* Truncated. */
  if (c != 0xd8) return -3;  /* Not a JPEG file, SOI expected. */
  for (;;) {
    /* printf("@%ld\n", ftell(f)); */
    if ((c = getc(f)) < 0) return -2;  /* Truncated. */
    if (c != 0xff) return -3;  /* Not a JPEG file, marker expected. */
    if ((m = getc(f)) < 0) return -2;  /* Truncated. */
    while (m == 0xff) {  /* Padding. */
      if ((m = getc(f)) < 0) return -2;  /* Truncated. */
    }
    if (m == 0xd8) return -4;  /* SOI unexpected. */
    if (m == 0xd9) break;  /* EOI. */
    if (m == 0xda) break;  /* SOS. Would need special escaping to process. */
    /* printf("MARKER 0x%02x\n", m); */
    if ((c = getc(f)) < 0) return -2;  /* Truncated. */
    ss = (c + 0U) << 8;
    if ((c = getc(f)) < 0) return -2;  /* Truncated. */
    ss += c;
    if (ss < 2) return -5;  /* Segment too short. */
    for (ss -= 2; ss > 0; --ss) {
      if ((c = getc(f)) < 0) return -2;  /* Truncated. */
      if (m == 0xfe) putc(c, of);  /* Emit comment char. */
    }
    if (m == 0xfe) putc('\n', of);  /* End of comment. */
  }
  return 0;
}

int main(int argc, char **argv) {
  (void)argc; (void)argv;
  return -get_jpeg_comments(stdin, stdout);
}

Here is how to compile and run it:

$ gcc -W -Wall -Wextra -Werror -s -O2 -o getjpegcom getjpegcom.c
$ ./getjpegcom com2.jpg
COMfoo
COMbar

2015-12-05

Announcing flickrurlget: Flickr photo downloader from command-line in batch

This blog post is an announcement of flickrurlget, a command-line tool for Unix, written in Python 2.x, which can be used to download photos from Flickr in batch. flickrurlget itself doesn't download photos, but it generates a list of raw photo URLs which can be downloaded with a download manager (even with `wget -i').

Download and install from source.

There are many photo download tools for Flickr (including command-line tools, GUI tools, Firefox extensions and Chrome extensions), but none of those I tried has the feature set of flickrurlget:

  • Can get highest-resolution (if possible, original) image URLs in:
    • a Flickr photo
    • a photostream of a Flickr user (i.e. all photos of the user)
    • the favorite photos of a Flickr user
    • an album (photoset) of a Flickr user
    • a Flickr group
    • a gallery of a Flickr user
  • Batch operation: started from the command-line with a bunch of Flickr page URLs, and it discovers all the photos in there without any interaction.
  • Can get anybody's photos (not only your own).
  • Can get non-public and adult (non-safe) photos as well (after logging in).
  • Doesn't need a GUI or a web browser for most of the operations.

There is the command-line tool flickr_download (source code and PyPI project page), which is also written in Python. Here is how flickrurlget is better than flickr_download:

  • Takes Flickr page URLs and figures out all parameters automatically.
  • Can download groups and galleries (in addition to user photostreams and photosets) as well.
  • Has user ID, group ID and photoset ID discovery: the user doesn't have to do manual ID lookups before running the tool.
  • Uses only standard Python modules, no other dependencies.
  • Better compatibility with old Python: works with 2.7, 2.6, 2.5 and 2.4; not only 2.7.

Flickr has a nice, full-featured REST API (it works with both XML and JSON); its documentation is available here. flickrurlget uses this API. It can also optionally use OAuth to log the user in.

2015-11-27

How to compute the intersection of two sorted lists (in Python)

This blog post explains how to compute the sorted intersection of two sorted lists, and it shows a fast Python implementation. The time complexity is O(min(n + m, n · log(m))), where n is the minimum of the list lengths, and m is the maximum of the list lengths.

The first idea is to take the first element of both lists, and, if they are different, discard the smaller one of the two. (The smaller one can't be in the intersection because it's smaller than all elements of the other list.) If the first elements were equal, then emit the value as part of the intersection, and discard both first elements. Continue this until one of the lists runs out. If discarding is implemented by incrementing index variables, this method finishes in O(n + m) time, because each iteration discards at least one element.

We can improve on the first idea by noticing that it's possible to discard multiple elements in the beginning (i.e. when the two first elements are different): it's possible to discard all elements which are smaller than the larger one of the two first elements. Depending on the input, this can be a lot of elements; for example, if all elements of the first list are smaller than all elements of the second list, then it will discard the entire first list in one step. In the general case, binary search can be used to figure out how many elements to discard. However, binary search takes logarithmic time, so the total execution time is O(m · log(m)), which can be faster or slower than the O(n + m) of the previous solution. In fact, by more careful analysis of the number of runs (blocks which are discarded), it's possible to show that the execution time is just O(n · log(m)), but that can still be slower than the previous solution.

It's possible to combine the two solutions: estimate the execution time of the 2nd solution, and if the estimate is smaller than n + m, execute the 2nd solution, otherwise execute the first solution. Please note that the estimate also takes into account the constants (not only big-O). The resulting Python (≥2.4 and 3.x) code looks like:

def intersect_sorted(a1, a2):
  """Yields the intersection of sorted lists a1 and a2, without deduplication.

  Execution time is O(min(lo + hi, lo * log(hi))), where lo == min(len(a1),
  len(a2)) and hi == max(len(a1), len(a2)). It can be faster depending on
  the data.
  """
  import bisect, math
  s1, s2 = len(a1), len(a2)
  i1 = i2 = 0
  if s1 and s1 + s2 > min(s1, s2) * math.log(max(s1, s2)) * 1.4426950408889634:
    bi = bisect.bisect_left
    while i1 < s1 and i2 < s2:
      v1, v2 = a1[i1], a2[i2]
      if v1 == v2:
        yield v1
        i1 += 1
        i2 += 1
      elif v1 < v2:
        i1 = bi(a1, v2, i1)
      else:
        i2 = bi(a2, v1, i2)
  else:  # The linear solution is faster.
    while i1 < s1 and i2 < s2:
      v1, v2 = a1[i1], a2[i2]
      if v1 == v2:
        yield v1
        i1 += 1
        i2 += 1
      elif v1 < v2:
        i1 += 1
      else:
        i2 += 1

The numeric constant 1.4426950408889634 in the code above is 1/math.log(2).

The code with some tests and with support for merging multiple sequences is available on GitHub here.

Related questions on StackOverflow: this and this.

2015-03-20

How to use rsync over USB on Android with adb

This blog post explains how to copy files using rsync between a computer running Unix (typically Linux or Mac OS X) and a mobile device running Android, using a USB cable. It's not necessary to root the device. It's not necessary to install any app.

If you want to copy over wifi rather than USB, then please use the app rsync backup for Android (rsync4android) instead. The rest of this tutorial describes a method which needs the computer and the mobile device connected with a USB data cable.

Enable USB debugging on the device. If you don't know how to do it in Settings, then find a tutorial online.

Install adb (Android Debug Bridge) to the Unix system. For example, on Ubuntu: sudo apt-get install android-tools-adb

Connect the mobile device to the computer. Run: adb shell id on the computer. (All commands should be run on the computer unless asked otherwise.) It should display something like:

$ adb shell id
uid=2000(shell) gid=2000(shell) groups=1004(input),... context=u:r:shell:s0

If it doesn't work, you may have to enable Settings / Developer options / USB debugging on the device, then reconnect, then click OK on the dialog box in the device, then rerun adb shell id on the computer.

Please note that on Cyanogenmod rsync is installed by default and it is on the $PATH, so you can skip some of the steps below. (If you don't know what to skip, just do everything anyway.)

Download the rsync binary: wget -O rsync.bin http://github.com/pts/rsyncbin/raw/master/rsync.rsync4android

Copy the rsync binary to the device: adb push rsync.bin /data/local/tmp/rsync

Make the rsync binary on the device executable: adb shell chmod 755 /data/local/tmp/rsync

Make sure you have a backup copy of the binary in a more permanent directory: adb shell cp /data/local/tmp/rsync /sdcard/rsync.bin

Get the rsync version by running adb shell /data/local/tmp/rsync --version . Typical output:

$ adb shell /data/local/tmp/rsync --version
rsync  version 3.1.1  protocol version 31
Copyright (C) 1996-2014 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
    64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
    no socketpairs, hardlinks, symlinks, no IPv6, batchfiles, inplace,
    append, no ACLs, no xattrs, no iconv, no symtimes, no prealloc

rsync comes with ABSOLUTELY NO WARRANTY.  This is free software, and you
are welcome to redistribute it under certain conditions.  See the GNU
General Public Licence for details.

Run rsync --version . If you get something like

rsync: command not found

, then rsync isn't installed to the computer. On Ubuntu you can install it with sudo apt-get install rsync

Create an rsyncd.conf config file and install it to the device by running: adb shell 'exec >/sdcard/rsyncd.conf && echo address = 127.0.0.1 && echo port = 1873 && echo "[root]" && echo path = / && echo use chroot = false && echo read only = false' . This must finish without displaying any (error) message.

Start the rsync daemon in the device by running: adb shell /data/local/tmp/rsync --daemon --no-detach --config=/sdcard/rsyncd.conf --log-file=/proc/self/fd/2 . It must start up with just a single message:

2015/03/19 15:58:14 [5814] rsyncd version 3.1.1 starting, listening on port 1873

Keep it running for the duration of the copies (below), and continue working in another terminal window. Or press Ctrl-C to exit right now, and restart it (so it will start running in the background on the device) like this: adb shell '/data/local/tmp/rsync --daemon --config=/sdcard/rsyncd.conf --log-file=/data/local/tmp/foo &'

Start port forwarding by running: adb forward tcp:6010 tcp:1873

Now you can start copying files with rsync (back and forth). An example command: rsync -av --progress --stats rsync://localhost:6010/root/sdcard/Ringtones .

You may find the --size-only flag useful if rsync is copying the same files over and over again.

You may want to copy from or to /storage/sdcard1 instead of /sdcard on the device.

Some newer storage devices have the exfat filesystem (older ones typically have fat or some emulation of it, and that's just fine). Writing to exfat drives rsync -av crazy: it reports steady progress with lots of Operation not permitted errors, but it actually doesn't create any files. This applies both rsync running on the Linux computer and rsync running on the device. A solution is to replace rsync -av with rsync -vrtlD , and restart the copy.