diff --git a/srcpkgs/rclone/files/rclone.1 b/srcpkgs/rclone/files/rclone.1 new file mode 100644 index 00000000000..4e24d75a385 --- /dev/null +++ b/srcpkgs/rclone/files/rclone.1 @@ -0,0 +1,1946 @@ +.\"t +.TH "rclone" "1" "Sep 15, 2015" "User Manual" "" +.SH Rclone +.PP +[IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/) +.PP +Rclone is a command line program to sync files and directories to and +from +.IP \[bu] 2 +Google Drive +.IP \[bu] 2 +Amazon S3 +.IP \[bu] 2 +Openstack Swift / Rackspace cloud files / Memset Memstore +.IP \[bu] 2 +Dropbox +.IP \[bu] 2 +Google Cloud Storage +.IP \[bu] 2 +Amazon Cloud Drive +.IP \[bu] 2 +The local filesystem +.PP +Features +.IP \[bu] 2 +MD5SUMs checked at all times for file integrity +.IP \[bu] 2 +Timestamps preserved on files +.IP \[bu] 2 +Partial syncs supported on a whole file basis +.IP \[bu] 2 +Copy mode to just copy new/changed files +.IP \[bu] 2 +Sync mode to make a directory identical +.IP \[bu] 2 +Check mode to check all MD5SUMs +.IP \[bu] 2 +Can sync to and from network, eg two different Drive accounts +.PP +Links +.IP \[bu] 2 +Home page (http://rclone.org/) +.IP \[bu] 2 +Github project page for source and bug +tracker (http://github.com/ncw/rclone) +.IP \[bu] 2 +Google+ page +.RS 2 +.RE +.IP \[bu] 2 +Downloads (http://rclone.org/downloads/) +.SS Install +.PP +Rclone is a Go program and comes as a single binary file. +.PP +Download (http://rclone.org/downloads/) the relevant binary. +.PP +Or alternatively if you have Go installed use +.IP +.nf +\f[C] +go\ get\ github.com/ncw/rclone +\f[] +.fi +.PP +and this will build the binary in \f[C]$GOPATH/bin\f[]. +If you have built rclone before then you will want to update its +dependencies first with this (remove \f[C]\-f\f[] if using go < 1.4) +.IP +.nf +\f[C] +go\ get\ \-u\ \-v\ \-f\ github.com/ncw/rclone/... +\f[] +.fi +.PP +See the Usage section (http://rclone.org/docs/) of the docs for how to +use rclone, or run \f[C]rclone\ \-h\f[]. +.SS linux binary downloaded files install example +.IP +.nf +\f[C] +unzip\ rclone\-v1.17\-linux\-amd64.zip +cd\ rclone\-v1.17\-linux\-amd64 +#copy\ binary\ file +sudo\ cp\ rclone\ /usr/sbin/ +sudo\ chown\ root:root\ /usr/sbin/rclone +sudo\ chmod\ 755\ /usr/sbin/rclone +#install\ manpage +sudo\ mkdir\ \-p\ /usr/local/share/man/man1 +sudo\ cp\ rclone.1\ /usr/local/share/man/man1/ +sudo\ mandb +\f[] +.fi +.SS Configure +.PP +First you\[aq]ll need to configure rclone. +As the object storage systems have quite complicated authentication +these are kept in a config file \f[C]\&.rclone.conf\f[] in your home +directory by default. +(You can use the \f[C]\-\-config\f[] option to choose a different config +file.) +.PP +The easiest way to make the config is to run rclone with the config +option: +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +See the following for detailed instructions for +.IP \[bu] 2 +Google drive (http://rclone.org/drive/) +.IP \[bu] 2 +Amazon S3 (http://rclone.org/s3/) +.IP \[bu] 2 +Swift / Rackspace Cloudfiles / Memset +Memstore (http://rclone.org/swift/) +.IP \[bu] 2 +Dropbox (http://rclone.org/dropbox/) +.IP \[bu] 2 +Google Cloud Storage (http://rclone.org/googlecloudstorage/) +.IP \[bu] 2 +Local filesystem (http://rclone.org/local/) +.SS Usage +.PP +Rclone syncs a directory tree from one storage system to another. 
+.PP +Its syntax is like this +.IP +.nf +\f[C] +Syntax:\ [options]\ subcommand\ <parameters>\ <parameters...> +\f[] +.fi +.PP +Source and destination paths are specified by the name you gave the +storage system in the config file then the sub path, eg "drive:myfolder" +to look at "myfolder" in Google drive. +.PP +You can define as many storage paths as you like in the config file. +.SS Subcommands +.SS rclone copy source:path dest:path +.PP +Copy the source to the destination. +Doesn\[aq]t transfer unchanged files, testing by size and modification +time or MD5SUM. +Doesn\[aq]t delete files from the destination. +.SS rclone sync source:path dest:path +.PP +Sync the source to the destination, changing the destination only. +Doesn\[aq]t transfer unchanged files, testing by size and modification +time or MD5SUM. +Destination is updated to match source, including deleting files if +necessary. +Since this can cause data loss, test first with the +\f[C]\-\-dry\-run\f[] flag. +.SS rclone ls [remote:path] +.PP +List all the objects in the path with size and path. +.SS rclone lsd [remote:path] +.PP +List all directories/containers/buckets in the path. +.SS rclone lsl [remote:path] +.PP +List all the objects in the path with modification time, size and +path. +.SS rclone md5sum [remote:path] +.PP +Produces an md5sum file for all the objects in the path. +This is in the same format as the standard md5sum tool produces. +.SS rclone mkdir remote:path +.PP +Make the path if it doesn\[aq]t already exist. +.SS rclone rmdir remote:path +.PP +Remove the path. +Note that you can\[aq]t remove a path with objects in it, use purge for +that. +.SS rclone purge remote:path +.PP +Remove the path and all of its contents. +.SS rclone check source:path dest:path +.PP +Checks the files in the source and destination match. +It compares sizes and MD5SUMs and prints a report of files which +don\[aq]t match. +It doesn\[aq]t alter the source or destination. +.SS rclone config +.PP +Enter an interactive configuration session. +.SS rclone help +.PP +Prints help on rclone commands and options. +.SS Server Side Copy +.PP +Drive, S3, Dropbox, Swift and Google Cloud Storage support server side +copy. +.PP +This means if you want to copy one folder to another then rclone +won\[aq]t download all the files and re\-upload them; it will instruct +the server to copy them in place. +.PP +Eg +.IP +.nf +\f[C] +rclone\ copy\ s3:oldbucket\ s3:newbucket +\f[] +.fi +.PP +Will copy the contents of \f[C]oldbucket\f[] to \f[C]newbucket\f[] +without downloading and re\-uploading. +.PP +Remotes which don\[aq]t support server side copy (eg local) +\f[B]will\f[] download and re\-upload in this case. +.PP +Server side copies are used with \f[C]sync\f[] and \f[C]copy\f[] and +will be identified in the log when using the \f[C]\-v\f[] flag. +.PP +Server side copies will only be attempted if the remote names are the +same. +.PP +This can be used when scripting to make aged backups efficiently, eg +.IP +.nf +\f[C] +rclone\ sync\ remote:current\-backup\ remote:previous\-backup +rclone\ sync\ /path/to/files\ remote:current\-backup +\f[] +.fi +.SS Options +.PP +Rclone has a number of options to control its behaviour. +.PP +Options which use TIME use the go time parser. +A duration string is a possibly signed sequence of decimal numbers, each +with optional fraction and a unit suffix, such as "300ms", "\-1.5h" or +"2h45m". +Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". +.PP +Options which use SIZE use kByte by default. +However a suffix of \f[C]k\f[] for kBytes, \f[C]M\f[] for MBytes and +\f[C]G\f[] for GBytes may be used. +These are the binary units, eg 2**10, 2**20, 2**30 respectively.
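+.PP +For example, both kinds of value can be given on the command line like +this (an illustrative sketch using the \f[C]\-\-contimeout\f[] and +\f[C]\-\-bwlimit\f[] flags described below; \f[C]remote:backup\f[] is a +placeholder for one of your configured remotes): +.IP +.nf +\f[C] +rclone\ \-\-contimeout\ 30s\ \-\-bwlimit\ 2M\ sync\ /home/source\ remote:backup +\f[] +.fi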
+.SS \-\-bwlimit=SIZE +.PP +Bandwidth limit in kBytes/s, or use suffix k|M|G. +The default is \f[C]0\f[] which means to not limit bandwidth. +.PP +For example to limit bandwidth usage to 10 MBytes/s use +\f[C]\-\-bwlimit\ 10M\f[] +.PP +This only limits the bandwidth of the data transfer, it doesn\[aq]t +limit the bandwidth of the directory listings etc. +.SS \-\-checkers=N +.PP +The number of checkers to run in parallel. +Checkers do the equality checking of files during a sync. +For some storage systems (eg s3, swift, dropbox) this can take a +significant amount of time so they are run in parallel. +.PP +The default is to run 8 checkers in parallel. +.SS \-c, \-\-checksum +.PP +Normally rclone will look at modification time and size of files to see +if they are equal. +If you set this flag then rclone will check MD5SUM and size to determine +if files are equal. +.PP +This is very useful when transferring between remotes which store the +MD5SUM on the object which include swift, s3, drive, and google cloud +storage. +.PP +Eg \f[C]rclone\ \-\-checksum\ sync\ s3:/bucket\ swift:/bucket\f[] would +run much quicker than without the \f[C]\-\-checksum\f[] flag. +.PP +When using this flag, rclone won\[aq]t update mtimes of remote files if +they are incorrect as it would normally. +.SS \-\-config=CONFIG_FILE +.PP +Specify the location of the rclone config file. +Normally this is in your home directory as a file called +\f[C]\&.rclone.conf\f[]. +If you run \f[C]rclone\ \-h\f[] and look at the help for the +\f[C]\-\-config\f[] option you will see where the default location is +for you. +Use this flag to override the config location, eg +\f[C]rclone\ \-\-config=".myconfig"\ .config\f[]. +.SS \-\-contimeout=TIME +.PP +Set the connection timeout. +This should be in go time format which looks like \f[C]5s\f[] for 5 +seconds, \f[C]10m\f[] for 10 minutes, or \f[C]3h30m\f[]. +.PP +The connection timeout is the amount of time rclone will wait for a +connection to go through to a remote object storage system. +It is \f[C]1m\f[] by default. +.SS \-n, \-\-dry\-run +.PP +Do a trial run with no permanent changes. +Use this in combination with the \f[C]\-v\f[] flag to see what rclone +would do without actually doing it. +Useful when setting up the \f[C]sync\f[] command. +.SS \-\-log\-file=FILE +.PP +Log all of rclone\[aq]s output to FILE. +This is not active by default. +This can be useful for tracking down problems with syncs in combination +with the \f[C]\-v\f[] flag. +.SS \-\-modify\-window=TIME +.PP +When checking whether a file has been modified, this is the maximum +allowed time difference that a file can have and still be considered +equivalent. +.PP +The default is \f[C]1ns\f[] unless this is overridden by a remote. +For example OS X only stores modification times to the nearest second so +if you are reading and writing to an OS X filing system this will be +\f[C]1s\f[] by default. +.PP +This command line flag allows you to override that computed default. +.SS \-q, \-\-quiet +.PP +Normally rclone outputs stats and a completion message. +If you set this flag it will make as little output as possible. +.SS \-\-size\-only +.PP +Normally rclone will look at modification time and size of files to see +if they are equal. +If you set this flag then rclone will check only the size.
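+.PP +For example, a size\-only sync might look like this (an illustrative +sketch; the paths are placeholders): +.IP +.nf +\f[C] +rclone\ \-\-size\-only\ sync\ /home/source\ remote:backup +\f[] +.fi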
+.PP +This can be useful transferring files from dropbox which have been +modified by the desktop sync client which doesn\[aq]t set checksums or +modification times in the same way as rclone. +.PP +When using this flag, rclone won\[aq]t update mtimes of remote files if +they are incorrect as it would normally. +.SS \-\-stats=TIME +.PP +Rclone will print stats at regular intervals to show its progress. +.PP +This sets the interval. +.PP +The default is \f[C]1m\f[]. +Use 0 to disable. +.SS \-\-timeout=TIME +.PP +This sets the IO idle timeout. +If a transfer has started but then becomes idle for this long it is +considered broken and disconnected. +.PP +The default is \f[C]5m\f[]. +Set to 0 to disable. +.SS \-\-transfers=N +.PP +The number of file transfers to run in parallel. +It can sometimes be useful to set this to a smaller number if the remote +is giving a lot of timeouts or bigger if you have lots of bandwidth and +a fast remote. +.PP +The default is to run 4 file transfers in parallel. +.SS \-v, \-\-verbose +.PP +If you set this flag, rclone will become very verbose telling you about +every file it considers and transfers. +.PP +Very useful for debugging. +.SS \-V, \-\-version +.PP +Prints the version number. +.SS Developer options +.PP +These options are useful when developing or debugging rclone. +There are also some more remote specific options which aren\[aq]t +documented here which are used for testing. +These start with remote name eg \f[C]\-\-drive\-test\-option\f[]. +.SS \-\-cpuprofile=FILE +.PP +Write cpu profile to file. +This can be analysed with \f[C]go\ tool\ pprof\f[]. +.SH Overview of cloud storage systems +.PP +Each cloud storage system is slightly different. +Rclone attempts to provide a unified interface to them, but some +underlying differences show through. +.SS Features +.PP +Here is an overview of the major features of each cloud storage system. +.PP +.TS +tab(@); +l c c c c. +T{ +Name +T}@T{ +MD5SUM +T}@T{ +ModTime +T}@T{ +Case Insensitive +T}@T{ +Duplicate Files +T} +_ +T{ +Google Drive +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T} +T{ +Amazon S3 +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T} +T{ +Openstack Swift +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T} +T{ +Dropbox +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T} +T{ +Google Cloud Storage +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T} +T{ +Amazon Cloud Drive +T}@T{ +Yes +T}@T{ +No +T}@T{ +Yes +T}@T{ +No +T} +T{ +The local filesystem +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Depends +T}@T{ +No +T} +.TE +.SS MD5SUM +.PP +The cloud storage system supports MD5SUMs of the objects. +This is used if available when transferring data as an integrity check +and can be specifically used with the \f[C]\-\-checksum\f[] flag in +syncs and in the \f[C]check\f[] command.
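+.PP +For example, a checksum verification of a completed sync might be run +like this (an illustrative sketch; the paths are placeholders): +.IP +.nf +\f[C] +rclone\ check\ /home/source\ remote:backup +\f[] +.fi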
+.SS ModTime +.PP +The cloud storage system supports setting modification times on objects. +If it does then this enables using the modification times as part of +the sync. +If not then only the size will be checked by default, though the MD5SUM +can be checked with the \f[C]\-\-checksum\f[] flag. +.PP +All cloud storage systems support some kind of date on the object and +these will be set when transferring from the cloud storage system. +.SS Case Insensitive +.PP +If a cloud storage system is case sensitive then it is possible to have +two files which differ only in case, eg \f[C]file.txt\f[] and +\f[C]FILE.txt\f[]. +If a cloud storage system is case insensitive then that isn\[aq]t +possible. +.PP +This can cause problems when syncing between a case insensitive system +and a case sensitive system. +The symptom of this is that no matter how many times you run the sync it +never completes fully. +.PP +The local filesystem may or may not be case sensitive depending on OS. +.IP \[bu] 2 +Windows \- usually case insensitive +.IP \[bu] 2 +OSX \- usually case insensitive, though it is possible to format case +sensitive +.IP \[bu] 2 +Linux \- usually case sensitive, but there are case insensitive file +systems (eg FAT formatted USB keys) +.PP +Most of the time this doesn\[aq]t cause any problems as people tend to +avoid files whose name differs only by case even on case sensitive +systems. +.SS Duplicate files +.PP +If a cloud storage system allows duplicate files then it can have two +objects with the same name. +.PP +This confuses rclone greatly when syncing. +.SS Google Drive +.PP +Paths are specified as \f[C]drive:path\f[] +.PP +Drive paths may be as deep as required, eg +\f[C]drive:directory/subdirectory\f[]. +.PP +The initial setup for drive involves getting a token from Google drive +which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it? +Choose\ a\ number\ from\ below +\ 1)\ swift +\ 2)\ s3 +\ 3)\ local +\ 4)\ drive +type>\ 4 +Google\ Application\ Client\ Id\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_id>\ +Google\ Application\ Client\ Secret\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_secret>\ +Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ =\ +client_secret\ =\ +token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null} +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. +.PP +You can then use it like this, +.PP +List directories in top level of your drive +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your drive +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to a drive directory called backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi +.SS Modified time +.PP +Google drive stores modification times accurate to 1 ms. +.SS Revisions +.PP +Google drive stores revisions of files.
+When you upload a change to an existing file to google drive using +rclone it will create a new revision of that file. +.PP +Revisions follow the standard google policy which at time of writing was +.IP \[bu] 2 +They are deleted after 30 days or 100 revisions (whatever comes first). +.IP \[bu] 2 +They do not count towards a user storage quota. +.SS Deleting files +.PP +By default rclone will delete files permanently when requested. +If sending them to the trash is required instead then use the +\f[C]\-\-drive\-use\-trash\f[] flag. +.SS Limitations +.PP +Drive has quite a lot of rate limiting. +This causes rclone to be limited to transferring about 2 files per +second only. +Individual files may be transferred much faster at 100s of MBytes/s but +lots of small files can take a long time. +.SS Amazon S3 +.PP +Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for +the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:bucket/path/to/dir\f[]. +.PP +Here is an example of making an s3 configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +q)\ Quit\ config +n/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it? +Choose\ a\ number\ from\ below +\ 1)\ swift +\ 2)\ s3 +\ 3)\ local +\ 4)\ google\ cloud\ storage +\ 5)\ dropbox +\ 6)\ drive +type>\ 2 +AWS\ Access\ Key\ ID. +access_key_id>\ accesskey +AWS\ Secret\ Access\ Key\ (password).\ +secret_access_key>\ secretaccesskey +Region\ to\ connect\ to. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ *\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure. +\ *\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. +\ *\ Leave\ location\ constraint\ empty. +\ 1)\ us\-east\-1 +\ *\ US\ West\ (Oregon)\ Region +\ *\ Needs\ location\ constraint\ us\-west\-2. +\ 2)\ us\-west\-2 +[snip] +\ *\ South\ America\ (Sao\ Paulo)\ Region +\ *\ Needs\ location\ constraint\ sa\-east\-1. +\ 9)\ sa\-east\-1 +\ *\ If\ using\ an\ S3\ clone\ that\ only\ understands\ v2\ signatures\ \-\ eg\ Ceph\ \-\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint. +10)\ other\-v2\-signature +\ *\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint. +11)\ other\-v4\-signature +region>\ 1 +Endpoint\ for\ S3\ API. +Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region. +Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph. +endpoint>\ +Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ *\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest. +\ 1)\ +\ *\ US\ West\ (Oregon)\ Region. +\ 2)\ us\-west\-2 +\ *\ US\ West\ (Northern\ California)\ Region. +\ 3)\ us\-west\-1 +\ *\ EU\ (Ireland)\ Region. 
+\ 4)\ eu\-west\-1 +[snip] +location_constraint>\ 1 +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +access_key_id\ =\ accesskey +secret_access_key\ =\ secretaccesskey +region\ =\ us\-east\-1 +endpoint\ =\ +location_constraint\ =\ +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +Current\ remotes: + +Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type +====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ==== +remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ s3 + +e)\ Edit\ existing\ remote +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ q +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all buckets +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone\ mkdir\ remote:bucket +\f[] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone\ ls\ remote:bucket +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any +excess files in the bucket. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:bucket +\f[] +.fi +.SS Modified time +.PP +The modified time is stored as metadata on the object as +\f[C]X\-Amz\-Meta\-Mtime\f[] as floating point since the epoch accurate +to 1 ns. +.SS Multipart uploads +.PP +rclone supports multipart uploads with S3 which means that it can upload +files bigger than 5GB. +Note that files uploaded with multipart upload don\[aq]t have an MD5SUM. +.SS Buckets and Regions +.PP +With Amazon S3 you can list buckets (\f[C]rclone\ lsd\f[]) using any +region, but you can only access the content of a bucket from the region +it was created in. +If you attempt to access a bucket from the wrong region, you will get an +error, +\f[C]incorrect\ region,\ the\ bucket\ is\ not\ in\ \[aq]XXX\[aq]\ region\f[]. +.SS Ceph +.PP +Ceph is an object storage system which presents an Amazon S3 interface. +.PP +To use rclone with ceph, you need to set the following parameters in the +config. +.IP +.nf +\f[C] +access_key_id\ =\ Whatever +secret_access_key\ =\ Whatever +endpoint\ =\ https://ceph.endpoint.goes.here/ +region\ =\ other\-v2\-signature +\f[] +.fi +.PP +Note also that Ceph sometimes puts \f[C]/\f[] in the passwords it gives +users. +If you read the secret access key using the command line tools you will +get a JSON blob with the \f[C]/\f[] escaped as \f[C]\\/\f[]. +Make sure you only write \f[C]/\f[] in the secret access key. +.PP +Eg the dump from Ceph looks something like this (irrelevant keys +removed). +.IP +.nf +\f[C] +{ +\ \ \ \ "user_id":\ "xxx", +\ \ \ \ "display_name":\ "xxxx", +\ \ \ \ "keys":\ [ +\ \ \ \ \ \ \ \ { +\ \ \ \ \ \ \ \ \ \ \ \ "user":\ "xxx", +\ \ \ \ \ \ \ \ \ \ \ \ "access_key":\ "xxxxxx", +\ \ \ \ \ \ \ \ \ \ \ \ "secret_key":\ "xxxxxx\\/xxxx" +\ \ \ \ \ \ \ \ } +\ \ \ \ ], +} +\f[] +.fi +.PP +Because this is a json dump, it is encoding the \f[C]/\f[] as +\f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it +will work fine. +.SS Swift +.PP +Swift refers to Openstack Object +Storage (http://www.openstack.org/software/openstack-storage/). +Commercial implementations of that being: +.IP \[bu] 2 +Rackspace Cloud Files (http://www.rackspace.com/cloud/files/) +.IP \[bu] 2 +Memset Memstore (http://www.memset.com/cloud/storage/) +.PP +Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[] +for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:container/path/to/dir\f[]. 
+.PP +Here is an example of making a swift configuration. +First run +.IP +.nf +\f[C] +rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No\ remotes\ found\ \-\ make\ a\ new\ one +n)\ New\ remote +q)\ Quit\ config +n/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it? +Choose\ a\ number\ from\ below +\ 1)\ swift +\ 2)\ s3 +\ 3)\ local +\ 4)\ drive +type>\ 1 +User\ name\ to\ log\ in. +user>\ user_name +API\ key\ or\ password. +key>\ password_or_api_key +Authentication\ URL\ for\ server. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ *\ Rackspace\ US +\ 1)\ https://auth.api.rackspacecloud.com/v1.0 +\ *\ Rackspace\ UK +\ 2)\ https://lon.auth.api.rackspacecloud.com/v1.0 +\ *\ Rackspace\ v2 +\ 3)\ https://identity.api.rackspacecloud.com/v2.0 +\ *\ Memset\ Memstore\ UK +\ 4)\ https://auth.storage.memset.com/v1.0 +\ *\ Memset\ Memstore\ UK\ v2 +\ 5)\ https://auth.storage.memset.com/v2.0 +auth>\ 1 +Tenant\ name\ \-\ optional +tenant> +Remote\ config +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +user\ =\ user_name +key\ =\ password_or_api_key +auth\ =\ https://auth.api.rackspacecloud.com/v1.0 +tenant\ = +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all containers +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new container +.IP +.nf +\f[C] +rclone\ mkdir\ remote:container +\f[] +.fi +.PP +List the contents of a container +.IP +.nf +\f[C] +rclone\ ls\ remote:container +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote container, deleting +any excess files in the container. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:container +\f[] +.fi +.SS Modified time +.PP +The modified time is stored as metadata on the object as +\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch +accurate to 1 ns. +.PP +This is a de facto standard (used in the official python\-swiftclient +amongst others) for storing the modification time for an object.
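+.PP +Subdirectories within a container can also be addressed directly, as +noted above (an illustrative sketch; the path is a placeholder): +.IP +.nf +\f[C] +rclone\ ls\ remote:container/path/to/dir +\f[] +.fi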
+.SS Dropbox +.PP +Paths are specified as \f[C]remote:path\f[] +.PP +Dropbox paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +The initial setup for dropbox involves getting a token from Dropbox +which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it? +Choose\ a\ number\ from\ below +\ 1)\ swift +\ 2)\ s3 +\ 3)\ local +\ 4)\ google\ cloud\ storage +\ 5)\ dropbox +\ 6)\ drive +type>\ 5 +Dropbox\ App\ Key\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +app_key>\ +Dropbox\ App\ Secret\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +app_secret>\ +Remote\ config +Please\ visit: +https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code +Enter\ the\ code:\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +app_key\ =\ +app_secret\ =\ +token\ =\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +You can then use it like this, +.PP +List directories in top level of your dropbox +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your dropbox +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to a dropbox directory called backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi +.SS Modified time and MD5SUMs +.PP +Dropbox doesn\[aq]t have the capability of storing modification times or +MD5SUMs so syncs will effectively have the \f[C]\-\-size\-only\f[] flag +set. +.SS Limitations +.PP +Note that Dropbox is case insensitive so you can\[aq]t have a file +called "Hello.doc" and one called "hello.doc". +.PP +There are some file names such as \f[C]thumbs.db\f[] which Dropbox +can\[aq]t store. +There is a full list of them in the "Ignored Files" section of this +document (https://www.dropbox.com/en/help/145). +Rclone will issue an error message +\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to +upload one of those file names, but the sync won\[aq]t fail. +.SS Google Cloud Storage +.PP +Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for +the \f[C]lsd\f[] command.) You may put subdirectories in too, eg +\f[C]remote:bucket/path/to/dir\f[]. +.PP +The initial setup for google cloud storage involves getting a token from +Google Cloud Storage which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it? +Choose\ a\ number\ from\ below +\ 1)\ swift +\ 2)\ s3 +\ 3)\ local +\ 4)\ google\ cloud\ storage +\ 5)\ dropbox +\ 6)\ drive +type>\ 4 +Google\ Application\ Client\ Id\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_id>\ +Google\ Application\ Client\ Secret\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_secret>\ +Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console. +project_number>\ 12345678 +Access\ Control\ List\ for\ new\ objects. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. +\ 1)\ authenticatedRead +\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access. +\ 2)\ bucketOwnerFullControl +\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access. +\ 3)\ bucketOwnerRead +\ *\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank]. +\ 4)\ private +\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles. +\ 5)\ projectPrivate +\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
+\ 6)\ publicRead +object_acl>\ 4 +Access\ Control\ List\ for\ new\ buckets. +Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value +\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. +\ 1)\ authenticatedRead +\ *\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank]. +\ 2)\ private +\ *\ Project\ team\ members\ get\ access\ according\ to\ their\ roles. +\ 3)\ projectPrivate +\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access. +\ 4)\ publicRead +\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access. +\ 5)\ publicReadWrite +bucket_acl>\ 2 +Remote\ config +Use\ auto\ config? +\ *\ Say\ Y\ if\ not\ sure +\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine +y)\ Yes +n)\ No +y/n>\ y +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +type\ =\ google\ cloud\ storage +client_id\ =\ +client_secret\ =\ +token\ =\ {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014\-07\-17T20:49:14.929208288+01:00","Extra":null} +project_number\ =\ 12345678 +object_acl\ =\ private +bucket_acl\ =\ private +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Google if you use auto config mode. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. +.PP +This remote is called \f[C]remote\f[] and can now be used like this +.PP +See all the buckets in your project +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone\ mkdir\ remote:bucket +\f[] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone\ ls\ remote:bucket +\f[] +.fi +.PP +Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any +excess files in the bucket. +.IP +.nf +\f[C] +rclone\ sync\ /home/local/directory\ remote:bucket +\f[] +.fi +.SS Modified time +.PP +Google cloud storage stores md5sums natively and rclone stores +modification times as metadata on the object, under the "mtime" key in +RFC3339 format accurate to 1ns. +.SS Amazon Cloud Drive +.PP +Paths are specified as \f[C]remote:path\f[] +.PP +Paths may be as deep as required, eg +\f[C]remote:directory/subdirectory\f[]. +.PP +The initial setup for Amazon cloud drive involves getting a token from +Amazon which you need to do in your browser. +\f[C]rclone\ config\f[] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[]. +First run: +.IP +.nf +\f[C] +\ rclone\ config +\f[] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n)\ New\ remote +d)\ Delete\ remote +q)\ Quit\ config +e/n/d/q>\ n +name>\ remote +What\ type\ of\ source\ is\ it?
+Choose\ a\ number\ from\ below +\ 1)\ amazon\ cloud\ drive +\ 2)\ drive +\ 3)\ dropbox +\ 4)\ google\ cloud\ storage +\ 5)\ local +\ 6)\ s3 +\ 7)\ swift +type>\ 1 +Amazon\ Application\ Client\ Id\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_id>\ +Amazon\ Application\ Client\ Secret\ \-\ leave\ blank\ to\ use\ rclone\[aq]s. +client_secret>\ +Remote\ config +If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth +Log\ in\ and\ authorize\ rclone\ for\ access +Waiting\ for\ code... +Got\ code +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +[remote] +client_id\ =\ +client_secret\ =\ +token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"} +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- +y)\ Yes\ this\ is\ OK +e)\ Edit\ this\ remote +d)\ Delete\ this\ remote +y/e/d>\ y +\f[] +.fi +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from Amazon. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you +to unblock it temporarily if you are running a host firewall. +.PP +Once configured you can then use \f[C]rclone\f[] like this, +.PP +List directories in top level of your Amazon cloud drive +.IP +.nf +\f[C] +rclone\ lsd\ remote: +\f[] +.fi +.PP +List all the files in your Amazon cloud drive +.IP +.nf +\f[C] +rclone\ ls\ remote: +\f[] +.fi +.PP +To copy a local directory to an Amazon cloud drive directory called +backup +.IP +.nf +\f[C] +rclone\ copy\ /home/source\ remote:backup +\f[] +.fi +.SS Modified time and MD5SUMs +.PP +Amazon cloud drive doesn\[aq]t allow modification times to be changed +via the API so these won\[aq]t be accurate or used for syncing. +.PP +It does store MD5SUMs so for a more accurate sync, you can use the +\f[C]\-\-checksum\f[] flag. +.SS Deleting files +.PP +Any files you delete with rclone will end up in the trash. +Amazon don\[aq]t provide an API to permanently delete files, nor to +empty the trash, so you will have to do that with one of Amazon\[aq]s +apps or via the Amazon cloud drive website. +.SS Limitations +.PP +Note that Amazon cloud drive is case insensitive so you can\[aq]t have a +file called "Hello.doc" and one called "hello.doc". +.PP +Amazon cloud drive has rate limiting so you may notice errors in the +sync (429 errors). +rclone will automatically retry the sync up to 3 times by default (see +\f[C]\-\-retries\f[] flag) which should hopefully work around this +problem. +.SS Local Filesystem +.PP +Local paths are specified as normal filesystem paths, eg +\f[C]/path/to/wherever\f[], so +.IP +.nf +\f[C] +rclone\ sync\ /home/source\ /tmp/destination +\f[] +.fi +.PP +Will sync \f[C]/home/source\f[] to \f[C]/tmp/destination\f[] +.PP +These can be configured into the config file for consistency\[aq]s sake, +but it is probably easier not to. +.SS Modified time +.PP +Rclone reads and writes the modified time using an accuracy determined +by the OS. +Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X. +.SS Filenames +.PP +Filenames are expected to be encoded in UTF\-8 on disk. +This is the normal case for Windows and OS X. +There is a bit more uncertainty in the Linux world, but new +distributions will have UTF\-8 encoded file names.
+.PP +If an invalid (non\-UTF8) filename is read, the invalid characters will +be replaced with the unicode replacement character, \[aq]�\[aq]. +\f[C]rclone\f[] will emit a debug message in this case (use \f[C]\-v\f[] +to see), eg +.IP +.nf +\f[C] +Local\ file\ system\ at\ .:\ Replacing\ invalid\ UTF\-8\ characters\ in\ "gro\\xdf" +\f[] +.fi +.SS Changelog +.IP \[bu] 2 +v1.20 \- 2015\-09\-15 +.RS 2 +.IP \[bu] 2 +New features +.IP \[bu] 2 +Amazon Cloud Drive support +.IP \[bu] 2 +Oauth support redone \- fix many bugs and improve usability +.RS 2 +.IP \[bu] 2 +Use "golang.org/x/oauth2" as oauth library of choice +.IP \[bu] 2 +Improve oauth usability for smoother initial signup +.IP \[bu] 2 +drive, googlecloudstorage: optionally use auto config for the oauth +token +.RE +.IP \[bu] 2 +Implement \-\-dump\-headers and \-\-dump\-bodies debug flags +.IP \[bu] 2 +Show multiple matched commands if abbreviation too short +.IP \[bu] 2 +Implement server side move where possible +.IP \[bu] 2 +local +.IP \[bu] 2 +Always use UNC paths internally on Windows \- fixes a lot of bugs +.IP \[bu] 2 +dropbox +.IP \[bu] 2 +force use of our custom transport which makes timeouts work +.IP \[bu] 2 +Thanks to Klaus Post for lots of help with this release +.RE +.IP \[bu] 2 +v1.19 \- 2015\-08\-28 +.RS 2 +.IP \[bu] 2 +New features +.IP \[bu] 2 +Server side copies for s3/swift/drive/dropbox/gcs +.IP \[bu] 2 +Move command \- uses server side copies if it can +.IP \[bu] 2 +Implement \-\-retries flag \- tries 3 times by default +.IP \[bu] 2 +Build for plan9/amd64 and solaris/amd64 too +.IP \[bu] 2 +Fixes +.IP \[bu] 2 +Make a current version download with a fixed URL for scripting +.IP \[bu] 2 +Ignore rmdir in limited fs rather than throwing error +.IP \[bu] 2 +dropbox +.IP \[bu] 2 +Increase chunk size to improve upload speeds massively +.IP \[bu] 2 +Issue an error message when trying to upload bad file name +.RE +.IP \[bu] 2 +v1.18 \- 2015\-08\-17 +.RS 2 +.IP \[bu] 2 +drive +.IP \[bu] 2 +Add \f[C]\-\-drive\-use\-trash\f[] flag so rclone trashes instead of +deletes +.IP \[bu] 2 +Add "Forbidden to download" message for files with no downloadURL +.IP \[bu] 2 +dropbox +.IP \[bu] 2 +Remove datastore +.RS 2 +.IP \[bu] 2 +This was deprecated and it caused a lot of problems +.IP \[bu] 2 +Modification times and MD5SUMs no longer stored +.RE +.IP \[bu] 2 +Fix uploading files > 2GB +.IP \[bu] 2 +s3 +.IP \[bu] 2 +use official AWS SDK from github.com/aws/aws\-sdk\-go +.IP \[bu] 2 +\f[B]NB\f[] will most likely require you to delete and recreate remote +.IP \[bu] 2 +enable multipart upload which enables files > 5GB +.IP \[bu] 2 +tested with Ceph / RadosGW / S3 emulation +.IP \[bu] 2 +many thanks to Sam Liston and Brian Haymore at the Utah Center for High +Performance Computing (https://www.chpc.utah.edu/) for a Ceph test +account +.IP \[bu] 2 +misc +.IP \[bu] 2 +Show errors when reading the config file +.IP \[bu] 2 +Do not print stats in quiet mode \- thanks Leonid Shalupov +.IP \[bu] 2 +Add FAQ +.IP \[bu] 2 +Fix created directories not obeying umask +.IP \[bu] 2 +Linux installation instructions \- thanks Shimon Doodkin +.RE +.IP \[bu] 2 +v1.17 \- 2015\-06\-14 +.RS 2 +.IP \[bu] 2 +dropbox: fix case insensitivity issues \- thanks Leonid Shalupov +.RE +.IP \[bu] 2 +v1.16 \- 2015\-06\-09 +.RS 2 +.IP \[bu] 2 +Fix uploading big files which was causing timeouts or panics +.IP \[bu] 2 +Don\[aq]t check md5sum after download with \-\-size\-only +.RE +.IP \[bu] 2 +v1.15 \- 2015\-06\-06 +.RS 2 +.IP \[bu] 2 +Add \-\-checksum flag to only discard
transfers by MD5SUM \- thanks Alex +Couper +.IP \[bu] 2 +Implement \-\-size\-only flag to sync on size not checksum & modtime +.IP \[bu] 2 +Expand docs and remove duplicated information +.IP \[bu] 2 +Document rclone\[aq]s limitations with directories +.IP \[bu] 2 +dropbox: update docs about case insensitivity +.RE +.IP \[bu] 2 +v1.14 \- 2015\-05\-21 +.RS 2 +.IP \[bu] 2 +local: fix encoding of non utf\-8 file names \- fixes a duplicate file +problem +.IP \[bu] 2 +drive: docs about rate limiting +.IP \[bu] 2 +google cloud storage: Fix compile after API change in +"google.golang.org/api/storage/v1" +.RE +.IP \[bu] 2 +v1.13 \- 2015\-05\-10 +.RS 2 +.IP \[bu] 2 +Revise documentation (especially sync) +.IP \[bu] 2 +Implement \-\-timeout and \-\-conntimeout +.IP \[bu] 2 +s3: ignore etags from multipart uploads which aren\[aq]t md5sums +.RE +.IP \[bu] 2 +v1.12 \- 2015\-03\-15 +.RS 2 +.IP \[bu] 2 +drive: Use chunked upload for files above a certain size +.IP \[bu] 2 +drive: add \-\-drive\-chunk\-size and \-\-drive\-upload\-cutoff +parameters +.IP \[bu] 2 +drive: switch to insert from update when a failed copy deletes the +upload +.IP \[bu] 2 +core: Log duplicate files if they are detected +.RE +.IP \[bu] 2 +v1.11 \- 2015\-03\-04 +.RS 2 +.IP \[bu] 2 +swift: add region parameter +.IP \[bu] 2 +drive: fix crash on failure to update remote mtime +.IP \[bu] 2 +In remote paths, change native directory separators to / +.IP \[bu] 2 +Add synchronization to ls/lsl/lsd output to stop corruptions +.IP \[bu] 2 +Ensure all stats/log messages go to stderr +.IP \[bu] 2 +Add \-\-log\-file flag to log everything (including panics) to file +.IP \[bu] 2 +Make it possible to disable stats printing with \-\-stats=0 +.IP \[bu] 2 +Implement \-\-bwlimit to limit data transfer bandwidth +.RE +.IP \[bu] 2 +v1.10 \- 2015\-02\-12 +.RS 2 +.IP \[bu] 2 +s3: list an unlimited number of items +.IP \[bu] 2 +Fix getting stuck in the configurator +.RE +.IP \[bu] 2 +v1.09 \- 2015\-02\-07 +.RS 2 +.IP \[bu] 2 +windows: Stop drive letters (eg C:) getting mixed up with remotes (eg +drive:) +.IP \[bu] 2 +local: Fix directory separators on Windows +.IP \[bu] 2 +drive: fix rate limit exceeded errors +.RE +.IP \[bu] 2 +v1.08 \- 2015\-02\-04 +.RS 2 +.IP \[bu] 2 +drive: fix subdirectory listing to not list entire drive +.IP \[bu] 2 +drive: Fix SetModTime +.IP \[bu] 2 +dropbox: adapt code to recent library changes +.RE +.IP \[bu] 2 +v1.07 \- 2014\-12\-23 +.RS 2 +.IP \[bu] 2 +google cloud storage: fix memory leak +.RE +.IP \[bu] 2 +v1.06 \- 2014\-12\-12 +.RS 2 +.IP \[bu] 2 +Fix "Couldn\[aq]t find home directory" on OSX +.IP \[bu] 2 +swift: Add tenant parameter +.IP \[bu] 2 +Use new location of Google API packages +.RE +.IP \[bu] 2 +v1.05 \- 2014\-08\-09 +.RS 2 +.IP \[bu] 2 +Improved tests and consequently lots of minor fixes +.IP \[bu] 2 +core: Fix race detected by go race detector +.IP \[bu] 2 +core: Fixes after running errcheck +.IP \[bu] 2 +drive: reset root directory on Rmdir and Purge +.IP \[bu] 2 +fs: Document that Purger returns error on empty directory, test and fix +.IP \[bu] 2 +google cloud storage: fix ListDir on subdirectory +.IP \[bu] 2 +google cloud storage: re\-read metadata in SetModTime +.IP \[bu] 2 +s3: make reading metadata more reliable to work around eventual +consistency problems +.IP \[bu] 2 +s3: strip trailing / from ListDir() +.IP \[bu] 2 +swift: return directories without / in ListDir +.RE +.IP \[bu] 2 +v1.04 \- 2014\-07\-21 +.RS 2 +.IP \[bu] 2 +google cloud storage: Fix crash on Update +.RE +.IP \[bu] 2 +v1.03 \-
2014\-07\-20 +.RS 2 +.IP \[bu] 2 +swift, s3, dropbox: fix updated files being marked as corrupted +.IP \[bu] 2 +Make compile with go 1.1 again +.RE +.IP \[bu] 2 +v1.02 \- 2014\-07\-19 +.RS 2 +.IP \[bu] 2 +Implement Dropbox remote +.IP \[bu] 2 +Implement Google Cloud Storage remote +.IP \[bu] 2 +Verify Md5sums and Sizes after copies +.IP \[bu] 2 +Remove times from "ls" command \- lists sizes only +.IP \[bu] 2 +Add "lsl" \- lists times and sizes +.IP \[bu] 2 +Add "md5sum" command +.RE +.IP \[bu] 2 +v1.01 \- 2014\-07\-04 +.RS 2 +.IP \[bu] 2 +drive: fix transfer of big files using up lots of memory +.RE +.IP \[bu] 2 +v1.00 \- 2014\-07\-03 +.RS 2 +.IP \[bu] 2 +drive: fix whole second dates +.RE +.IP \[bu] 2 +v0.99 \- 2014\-06\-26 +.RS 2 +.IP \[bu] 2 +Fix \-\-dry\-run not working +.IP \[bu] 2 +Make compatible with go 1.1 +.RE +.IP \[bu] 2 +v0.98 \- 2014\-05\-30 +.RS 2 +.IP \[bu] 2 +s3: Treat missing Content\-Length as 0 for some ceph installations +.IP \[bu] 2 +rclonetest: add file with a space in +.RE +.IP \[bu] 2 +v0.97 \- 2014\-05\-05 +.RS 2 +.IP \[bu] 2 +Implement copying of single files +.IP \[bu] 2 +s3 & swift: support paths inside containers/buckets +.RE +.IP \[bu] 2 +v0.96 \- 2014\-04\-24 +.RS 2 +.IP \[bu] 2 +drive: Fix multiple files of same name being created +.IP \[bu] 2 +drive: Use o.Update and fs.Put to optimise transfers +.IP \[bu] 2 +Add version number, \-V and \-\-version +.RE +.IP \[bu] 2 +v0.95 \- 2014\-03\-28 +.RS 2 +.IP \[bu] 2 +rclone.org: website, docs and graphics +.IP \[bu] 2 +drive: fix path parsing +.RE +.IP \[bu] 2 +v0.94 \- 2014\-03\-27 +.RS 2 +.IP \[bu] 2 +Change remote format one last time +.IP \[bu] 2 +GNU style flags +.RE +.IP \[bu] 2 +v0.93 \- 2014\-03\-16 +.RS 2 +.IP \[bu] 2 +drive: store token in config file +.IP \[bu] 2 +cross compile other versions +.IP \[bu] 2 +set strict permissions on config file +.RE +.IP \[bu] 2 +v0.92 \- 2014\-03\-15 +.RS 2 +.IP \[bu] 2 +Config fixes and \-\-config option +.RE +.IP \[bu] 2 +v0.91 \- 2014\-03\-15 +.RS 2 +.IP \[bu] 2 +Make config file +.RE +.IP \[bu] 2 +v0.90 \- 2013\-06\-27 +.RS 2 +.IP \[bu] 2 +Project named rclone +.RE +.IP \[bu] 2 +v0.00 \- 2012\-11\-18 +.RS 2 +.IP \[bu] 2 +Project started +.RE +.SS Bugs and Limitations +.SS Empty directories are left behind / not created +.PP +With remotes that have a concept of directory, eg Local and Drive, empty +directories may be left behind, or not created when one was expected. +.PP +This is because rclone doesn\[aq]t have a concept of a directory \- it +only works on objects. +Most of the object storage systems can\[aq]t actually store a directory +so there is nowhere for rclone to store anything about directories. +.PP +You can work round this to some extent with the \f[C]purge\f[] command +which will delete everything under the path, \f[B]including\f[] empty +directories.
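+.PP +For example (an illustrative sketch \- note that \f[C]purge\f[] deletes +data, so a trial run with \f[C]\-\-dry\-run\f[] first is prudent): +.IP +.nf +\f[C] +rclone\ \-\-dry\-run\ purge\ remote:path +\f[] +.fi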
+If you want to find this file, the simplest way is to run +\f[C]rclone\ \-h\f[] and look at the help for the \f[C]\-\-config\f[] +flag which will tell you where it is. +Eg, +.IP +.nf +\f[C] +$\ rclone\ \-h +Sync\ files\ and\ directories\ to\ and\ from\ local\ and\ remote\ object\ stores\ \-\ v1.18. +[snip] +Options: +\ \ \ \ \ \ \-\-bwlimit=0:\ Bandwidth\ limit\ in\ kBytes/s,\ or\ use\ suffix\ k|M|G +\ \ \ \ \ \ \-\-checkers=8:\ Number\ of\ checkers\ to\ run\ in\ parallel. +\ \ \-c,\ \-\-checksum=false:\ Skip\ based\ on\ checksum\ &\ size,\ not\ mod\-time\ &\ size +\ \ \ \ \ \ \-\-config="/home/user/.rclone.conf":\ Config\ file. +[snip] +\f[] +.fi +.PP +So in this config the config file can be found in +\f[C]/home/user/.rclone.conf\f[]. +.PP +Just copy that to the equivalent place in the destination (run +\f[C]rclone\ \-h\f[] above again on the destination machine if not +sure). +.SS Can rclone sync directly from drive to s3 +.PP +Rclone can sync between two remote cloud storage systems just fine. +.PP +Note that it effectively downloads the file and uploads it again, so the +node running rclone would need to have lots of bandwidth. +.PP +The syncs would be incremental (on a file by file basis). +.PP +Eg +.IP +.nf +\f[C] +rclone\ sync\ drive:Folder\ s3:bucket +\f[] +.fi +.SS Using rclone from multiple locations at the same time +.PP +You can use rclone from multiple places at the same time if you choose +different subdirectory for the output, eg +.IP +.nf +\f[C] +Server\ A>\ rclone\ sync\ /tmp/whatever\ remote:ServerA +Server\ B>\ rclone\ sync\ /tmp/whatever\ remote:ServerB +\f[] +.fi +.PP +If you sync to the same directory then you should use rclone copy +otherwise the two rclones may delete each others files, eg +.IP +.nf +\f[C] +Server\ A>\ rclone\ copy\ /tmp/whatever\ remote:Backup +Server\ B>\ rclone\ copy\ /tmp/whatever\ remote:Backup +\f[] +.fi +.PP +The file names you upload from Server A and Server B should be different +in this case, otherwise some file systems (eg Drive) may make +duplicates. +.SS Why doesn\[aq]t rclone support partial transfers / binary diffs like +rsync? +.PP +Rclone stores each file you transfer as a native object on the remote +cloud storage system. +This means that you can see the files you upload as expected using +alternative access methods (eg using the Google Drive web interface). +There is a 1:1 mapping between files on your hard disk and objects +created in the cloud storage system. +.PP +Cloud storage systems (at least none I\[aq]ve come across yet) don\[aq]t +support partially uploading an object. +You can\[aq]t take an existing object, and change some bytes in the +middle of it. +.PP +It would be possible to make a sync system which stored binary diffs +instead of whole objects like rclone does, but that would break the 1:1 +mapping of files on your hard disk to objects in the remote cloud +storage system. +.PP +All the cloud storage systems support partial downloads of content, so +it would be possible to make partial downloads work. +However to make this work efficiently this would require storing a +significant amount of metadata, which breaks the desired 1:1 mapping of +files to objects. +.SS Can rclone do bi\-directional sync? +.PP +No, not at present. +rclone only does uni\-directional sync from A \-> B. +It may do in the future though since it has all the primitives \- it +just requires writing the algorithm to do it. 
+.SS License +.PP +This is free software under the terms of the MIT license (check the +COPYING file included with the source code). +.IP +.nf +\f[C] +Copyright\ (C)\ 2012\ by\ Nick\ Craig\-Wood\ http://www.craig\-wood.com/nick/ + +Permission\ is\ hereby\ granted,\ free\ of\ charge,\ to\ any\ person\ obtaining\ a\ copy +of\ this\ software\ and\ associated\ documentation\ files\ (the\ "Software"),\ to\ deal +in\ the\ Software\ without\ restriction,\ including\ without\ limitation\ the\ rights +to\ use,\ copy,\ modify,\ merge,\ publish,\ distribute,\ sublicense,\ and/or\ sell +copies\ of\ the\ Software,\ and\ to\ permit\ persons\ to\ whom\ the\ Software\ is +furnished\ to\ do\ so,\ subject\ to\ the\ following\ conditions: + +The\ above\ copyright\ notice\ and\ this\ permission\ notice\ shall\ be\ included\ in +all\ copies\ or\ substantial\ portions\ of\ the\ Software. + +THE\ SOFTWARE\ IS\ PROVIDED\ "AS\ IS",\ WITHOUT\ WARRANTY\ OF\ ANY\ KIND,\ EXPRESS\ OR +IMPLIED,\ INCLUDING\ BUT\ NOT\ LIMITED\ TO\ THE\ WARRANTIES\ OF\ MERCHANTABILITY, +FITNESS\ FOR\ A\ PARTICULAR\ PURPOSE\ AND\ NONINFRINGEMENT.\ IN\ NO\ EVENT\ SHALL\ THE +AUTHORS\ OR\ COPYRIGHT\ HOLDERS\ BE\ LIABLE\ FOR\ ANY\ CLAIM,\ DAMAGES\ OR\ OTHER +LIABILITY,\ WHETHER\ IN\ AN\ ACTION\ OF\ CONTRACT,\ TORT\ OR\ OTHERWISE,\ ARISING\ FROM, +OUT\ OF\ OR\ IN\ CONNECTION\ WITH\ THE\ SOFTWARE\ OR\ THE\ USE\ OR\ OTHER\ DEALINGS\ IN +THE\ SOFTWARE. +\f[] +.fi +.SS Authors +.IP \[bu] 2 +Nick Craig\-Wood +.SS Contributors +.IP \[bu] 2 +Alex Couper +.IP \[bu] 2 +Leonid Shalupov +.IP \[bu] 2 +Shimon Doodkin +.IP \[bu] 2 +Colin Nicholson +.IP \[bu] 2 +Klaus Post +.SS Contact the rclone project +.PP +The project website is at: +.IP \[bu] 2 +https://github.com/ncw/rclone +.PP +There you can file bug reports, ask for help or contribute pull +requests. +.PP +See also +.IP \[bu] 2 +Google+ page for general comments +.RS 2 +.RE +.PP +Or email Nick Craig\-Wood (mailto:nick@craig-wood.com) +.SH AUTHORS +Nick Craig\-Wood. diff --git a/srcpkgs/rclone/template b/srcpkgs/rclone/template new file mode 100644 index 00000000000..5b051b0055f --- /dev/null +++ b/srcpkgs/rclone/template @@ -0,0 +1,18 @@ +# Template file for 'rclone' +pkgname=rclone +version=1.20 +revision=1 +build_style=go +go_import_path="github.com/ncw/rclone" +hostmakedepends="git" +short_desc="An rsync for cloud storage" +maintainer="Diogo Leal " +license="MIT" +homepage="http://rclone.org/downloads/" +distfiles="https://github.com/ncw/rclone/archive/v${version}.tar.gz" +checksum=e8a273474bf2ae8a2b0b6f01f29bf65524f9bd04774490608225ab8591d9ce08 + +post_install(){ + vlicense COPYING + vman ${FILESDIR}/rclone.1 +}