Applying rclone beta / cache (it still can't beat PlexDrive)

Category: Server / Linux Server · 2017.12.22 21:38

Hello, this is 도정진. This time, let's take a look at the cache feature that landed in the rclone beta.

 

1. Downloading the rclone beta


Download it from the links below.

https://beta.rclone.org/

https://beta.rclone.org/rclone-beta-latest-linux-amd64.zip

https://beta.rclone.org/rclone-beta-latest-linux-arm.zip
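
For reference, the whole download-and-install sequence walked through in section 2 below can be condensed into a few commands. This is just a sketch for the linux-arm build; substitute linux-amd64 on x86 machines, and note the versioned directory name is matched with a glob because it changes with every beta:

```shell
# Fetch and install the latest rclone beta (linux-arm build).
wget https://beta.rclone.org/rclone-beta-latest-linux-arm.zip
unzip -o rclone-beta-latest-linux-arm.zip
# The extracted directory name embeds the beta version, so use a glob.
cd rclone-v*-linux-arm*/ || exit 1
chmod a+x rclone
cp rclone /usr/local/bin/
```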

 




2. Installation


The installation below was done on a U5PVR.

 

root@AOL-Debian:~# wget https://beta.rclone.org/rclone-beta-latest-linux-arm.zip

--2017-12-22 20:31:38--  https://beta.rclone.org/rclone-beta-latest-linux-arm.zip

Resolving beta.rclone.org (beta.rclone.org)... 5.153.250.7, 2a02:24e0:8:61f9::1

Connecting to beta.rclone.org (beta.rclone.org)|5.153.250.7|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 5787066 (5.5M) [application/zip]

Saving to: ‘rclone-beta-latest-linux-arm.zip’

 

rclone-beta-latest-linux-arm.z  49%[=======================>                         ]   2.71M   335KB/s   eta 10s

 

Unzip the archive and make the binary executable.

 

root@AOL-Debian:~# unzip rclone-beta-latest-linux-arm.zip

Archive:  rclone-beta-latest-linux-arm.zip

   creating: rclone-v1.38-247-g5683f740β-linux-arm/

  inflating: rclone-v1.38-247-g5683f740β-linux-arm/rclone

  inflating: rclone-v1.38-247-g5683f740β-linux-arm/README.txt

  inflating: rclone-v1.38-247-g5683f740β-linux-arm/README.html

  inflating: rclone-v1.38-247-g5683f740β-linux-arm/rclone.1

  inflating: rclone-v1.38-247-g5683f740β-linux-arm/git-log.txt

root@AOL-Debian:~# cd rclone-

-bash: cd: rclone-: No such file or directory

root@AOL-Debian:~# cd rclone-

rclone-beta-latest-linux-arm.zip       rclone-v1.38-247-g5683f740β-linux-arm/


root@AOL-Debian:~# cd rclone-v1.38-247-g5683f740β-linux-arm/

root@AOL-Debian:~/rclone-v1.38-247-g5683f740β-linux-arm# ls

git-log.txt  rclone  rclone.1  README.html  README.txt

root@AOL-Debian:~/rclone-v1.38-247-g5683f740β-linux-arm# chmod a+x rclone

 

Next, copy it to /usr/local/bin.

 

root@AOL-Debian:~/rclone-v1.38-247-g5683f740β-linux-arm# cp rclone /usr/local/bin

root@AOL-Debian:~/rclone-v1.38-247-g5683f740β-linux-arm# cd ~

root@AOL-Debian:~# rclone

2017/12/22 20:33:07 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults

Usage:

  rclone [flags]

  rclone [command]

 

Available Commands:

  authorize       Remote authorization.

  cachestats      Print cache stats for a remote

  cat             Concatenates any files and sends them to stdout.

  check           Checks the files in the source and destination match.

  cleanup         Clean up the remote if possible

  config          Enter an interactive configuration session.

  copy            Copy files from source to dest, skipping already copied

  copyto          Copy files from source to dest, skipping already copied

  cryptcheck      Cryptcheck checks the integrity of a crypted remote.

  cryptdecode     Cryptdecode returns unencrypted file names.

  dbhashsum       Produces a Dropbox hash file for all the objects in the path.

  dedupe          Interactively find duplicate files and delete/rename them.

  delete          Remove the contents of path.

  genautocomplete Output completion script for a given shell.

  gendocs         Output markdown docs for rclone to the directory supplied.

  help            Help about any command

  listremotes     List all the remotes in the config file.

  ls              List all the objects in the path with size and path.

  lsd             List all directories/containers/buckets in the path.

  lsjson          List directories and objects in the path in JSON format.

  lsl             List all the objects path with modification time, size and path.

  md5sum          Produces an md5sum file for all the objects in the path.

  mkdir           Make the path if it doesn't already exist.

  mount           Mount the remote as a mountpoint. **EXPERIMENTAL**

  move            Move files from source to dest.

  moveto          Move file or directory from source to dest.

  ncdu            Explore a remote with a text based user interface.

  obscure         Obscure password for use in the rclone.conf

  purge           Remove the path and all of its contents.

  rcat            Copies standard input to file on remote.

  rmdir           Remove the path if empty.

  rmdirs          Remove empty directories under the path.

  serve           Serve a remote over a protocol.

  sha1sum         Produces an sha1sum file for all the objects in the path.

  size            Prints the total size and number of objects in remote:path.

  sync            Make source and dest identical, modifying destination only.

  touch           Create new file or change file modification time.

  tree            List the contents of the remote in a tree like fashion.

  version         Show the version number.

 

Flags:

      --acd-templink-threshold int          Files >= this size will be downloaded via their tempLink. (default 9G)

      --acd-upload-wait-per-gb duration     Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)

      --ask-password                        Allow prompt for password for encrypted configuration. (default true)

      --auto-confirm                        If enabled, do not request console confirmation.

      --azureblob-chunk-size int            Upload chunk size. Must fit in memory. (default 4M)

      --azureblob-upload-cutoff int         Cutoff for switching to chunked upload (default 256M)

      --b2-chunk-size int                   Upload chunk size. Must fit in memory. (default 96M)

      --b2-hard-delete                      Permanently delete files on remote removal, otherwise hide files.

      --b2-test-mode string                 A flag string for X-Bz-Test-Mode header.

      --b2-upload-cutoff int                Cutoff for switching to chunked upload (default 190.735M)

      --b2-versions                         Include old versions in directory listings.

      --backup-dir string                   Make backups into hierarchy based in DIR.

      --bind string                         Local address to bind to for outgoing connections, IPv4, IPv6 or name.

      --box-upload-cutoff int               Cutoff for switching to multipart upload (default 50M)

      --buffer-size int                     Buffer size when copying files. (default 16M)

      --bwlimit BwTimetable                 Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.

      --cache-chunk-clean-interval string   Interval at which chunk cleanup runs (default "1m")

      --cache-chunk-no-memory               Disable the in-memory cache for storing chunks during streaming

      --cache-chunk-path string             Directory to cached chunk files (default "/root/.cache/rclone/cache-backend")

      --cache-chunk-size string             The size of a chunk (default "5M")

      --cache-db-path string                Directory to cache DB (default "/root/.cache/rclone/cache-backend")

      --cache-db-purge                      Purge the cache DB before

      --cache-dir string                    Directory rclone will use for caching. (default "/root/.cache/rclone")

      --cache-info-age string               How much time should object info be stored in cache (default "6h")

      --cache-read-retries int              How many times to retry a read from a cache storage (default 10)

      --cache-rps int                       Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)

      --cache-total-chunk-size string       The total size which the chunks can take up from the disk (default "10G")

      --cache-workers int                   How many workers should run in parallel to download chunks (default 4)

      --cache-writes                        Will cache file data on writes through the FS

      --checkers int                        Number of checkers to run in parallel. (default 8)

  -c, --checksum                            Skip based on checksum & size, not mod-time & size

      --config string                       Config file. (default "/root/.config/rclone/rclone.conf")

      --contimeout duration                 Connect timeout (default 1m0s)

  -L, --copy-links                          Follow symlinks and copy the pointed to item.

      --cpuprofile string                   Write cpu profile to file

      --crypt-show-mapping                  For all files listed show how the names encrypt.

      --delete-after                        When synchronizing, delete files on destination after transfering

      --delete-before                       When synchronizing, delete files on destination before transfering

      --delete-during                       When synchronizing, delete files during transfer (default)

      --delete-excluded                     Delete files on dest excluded from sync

      --disable string                      Disable a comma separated list of features.  Use help to see a list.

      --drive-auth-owner-only               Only consider files owned by the authenticated user.

      --drive-chunk-size int                Upload chunk size. Must a power of 2 >= 256k. (default 8M)

      --drive-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")

      --drive-list-chunk int                Size of listing chunk 100-1000. 0 to disable. (default 1000)

      --drive-shared-with-me                Only show files that are shared with me

      --drive-skip-gdocs                    Skip google documents in all listings.

      --drive-trashed-only                  Only show files that are in the trash

      --drive-upload-cutoff int             Cutoff for switching to chunked upload (default 8M)

      --drive-use-trash                     Send files to the trash instead of deleting permanently. (default true)

      --dropbox-chunk-size int              Upload chunk size. Max 150M. (default 48M)

  -n, --dry-run                             Do a trial run with no permanent changes

      --dump string                         List of items to dump from:

      --dump-bodies                         Dump HTTP headers and bodies - may contain sensitive info

      --dump-headers                        Dump HTTP headers - may contain sensitive info

      --exclude stringArray                 Exclude files matching pattern

      --exclude-from stringArray            Read exclude patterns from file

      --exclude-if-present string           Exclude directories if filename is present

      --fast-list                           Use recursive list if available. Uses more memory but fewer transactions.

      --files-from stringArray              Read list of source-file names from file

  -f, --filter stringArray                  Add a file-filtering rule

      --filter-from stringArray             Read filtering patterns from a file

      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).

      --gcs-storage-class string            Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).

  -h, --help                                help for rclone

      --ignore-checksum                     Skip post copy check of checksums.

      --ignore-existing                     Skip all files that exist on destination

      --ignore-size                         Ignore size when skipping use mod-time or checksum.

  -I, --ignore-times                        Don't skip files that match size and time - transfer all files

      --immutable                           Do not modify files. Fail if existing files have been modified.

      --include stringArray                 Include files matching pattern

      --include-from stringArray            Read include patterns from file

      --local-no-unicode-normalization      Don't apply unicode normalization to paths and filenames

      --log-file string                     Log everything to this file

      --log-level string                    Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")

      --low-level-retries int               Number of low level retries to do. (default 10)

      --max-age string                      Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y

      --max-depth int                       If set limits the recursion depth to this. (default -1)

      --max-size int                        Don't transfer any file larger than this in k or suffix b|k|M|G (default off)

      --memprofile string                   Write memory profile to file

      --min-age string                      Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y

      --min-size int                        Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)

      --modify-window duration              Max time diff to be considered the same (default 1ns)

      --no-check-certificate                Do not verify the server SSL certificate. Insecure.

      --no-gzip-encoding                    Don't set Accept-Encoding: gzip.

      --no-traverse                         Don't traverse destination file system on copy.

      --no-update-modtime                   Don't update destination mod-time if files identical.

      --old-sync-method                     Deprecated - use --fast-list instead

  -x, --one-file-system                     Don't cross filesystem boundaries.

      --onedrive-chunk-size int             Above this size files will be chunked - must be multiple of 320k. (default 10M)

      --onedrive-upload-cutoff int          Cutoff for switching to chunked upload - must be <= 100MB (default 10M)

      --pcloud-upload-cutoff int            Cutoff for switching to multipart upload (default 50M)

  -q, --quiet                               Print as little stuff as possible

      --retries int                         Retry operations this many times if they fail (default 3)

      --s3-acl string                       Canned ACL used when creating buckets and/or storing objects in S3

      --s3-storage-class string             Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)

      --size-only                           Skip based on size only, not mod-time or checksum

      --skip-links                          Don't warn about skipped symlinks.

      --stats duration                      Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)

      --stats-log-level string              Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")

      --stats-unit string                   Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")

      --streaming-upload-cutoff int         Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)

      --suffix string                       Suffix for use with --backup-dir.

      --swift-chunk-size int                Above this size files will be chunked into a _segments container. (default 5G)

      --syslog                              Use Syslog for logging

      --syslog-facility string              Facility for syslog, eg KERN,USER,... (default "DAEMON")

      --timeout duration                    IO idle timeout (default 5m0s)

      --tpslimit float                      Limit HTTP transactions per second to this.

      --tpslimit-burst int                  Max burst of transactions for --tpslimit. (default 1)

      --track-renames                       When synchronizing, track file renames and do a server side move if possible

      --transfers int                       Number of file transfers to run in parallel. (default 4)

  -u, --update                              Skip files that are newer on the destination.

      --user-agent string                   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38-247-g5683f740β")

  -v, --verbose count[=-1]                  Print lots more stuff (repeat for more)

  -V, --version                             Print the version number

 

Use "rclone [command] --help" for more information about a command.

Command not found.

root@AOL-Debian:~#

 




3. Creating a remote (Google Drive)


The steps below register Google Drive as an rclone remote.


root@AOL-Debian:~# rclone config

2017/12/22 20:34:09 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults

No remotes found - make a new one

# Create a new remote.

n) New remote

s) Set configuration password

q) Quit config

n/s/q> n


# Enter any name you like.

name> gdrive

Type of storage to configure.

Choose a number from below, or type in your own value

 1 / Amazon Drive

   \ "amazon cloud drive"

 2 / Amazon S3 (also Dreamhost, Ceph, Minio)

   \ "s3"

 3 / Backblaze B2

   \ "b2"

 4 / Box

   \ "box"

 5 / Cache a remote

   \ "cache"

 6 / Dropbox

   \ "dropbox"

 7 / Encrypt/Decrypt a remote

   \ "crypt"

 8 / FTP Connection

   \ "ftp"

 9 / Google Cloud Storage (this is not Google Drive)

   \ "google cloud storage"

10 / Google Drive

   \ "drive"

11 / Hubic

   \ "hubic"

12 / Local Disk

   \ "local"

13 / Microsoft Azure Blob Storage

   \ "azureblob"

14 / Microsoft OneDrive

   \ "onedrive"

15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)

   \ "swift"

16 / Pcloud

   \ "pcloud"

17 / QingCloud Object Storage

   \ "qingstor"

18 / SSH/SFTP Connection

   \ "sftp"

19 / Webdav

   \ "webdav"

20 / Yandex Disk

   \ "yandex"

21 / http Connection

   \ "http"


# We are connecting Google Drive, so enter 10.

Storage> 10

Google Application Client Id - leave blank normally.

client_id>

Google Application Client Secret - leave blank normally.

client_secret>

Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.

service_account_file>

Remote config

Use auto config?

 * Say Y if not sure

 * Say N if you are working on a remote or headless machine or Y didn't work

y) Yes

n) No


# This is a headless machine, so enter n.

y/n> n

If your browser doesn't open automatically go to the following link: https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=4a15283306538daf1c00589aed284a63


# Open the link above in a browser.

 

Opening the link brings up Google's authorization screen; sign in, and copy the verification code shown on the next page.



 

Log in and authorize rclone for access


# Paste the code from the page above.

Enter verification code> 4/ (input your code)


# If you want to configure this as a Team Drive, answer Y here.

Configure this as a team drive?

y) Yes

n) No

y/n> n

--------------------

[gdrive]

client_id =

client_secret =

service_account_file =

token =

--------------------

# Everything looks fine, so enter y.

y) Yes this is OK

e) Edit this remote

d) Delete this remote

y/e/d> y

Current remotes:

 

Name                 Type

====                 ====

gdrive               drive

 

e) Edit existing remote

n) New remote

d) Delete remote

r) Rename remote

c) Copy remote

s) Set configuration password

q) Quit config

e/n/d/r/c/s/q> q

root@AOL-Debian:~#
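
Before moving on, it's worth sanity-checking the new remote. Both commands below appear in the rclone command list printed earlier; `lsd` lists top-level directories and `size` confirms the remote is readable:

```shell
# List top-level directories on the freshly configured remote
rclone lsd gdrive:

# Show total size and object count (useful to confirm access)
rclone size gdrive:
```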

 




4. Creating the cache remote

 

root@AOL-Debian:~# rclone config

Current remotes:

 

Name                 Type

====                 ====

gdrive               drive

 

e) Edit existing remote

n) New remote

d) Delete remote

r) Rename remote

c) Copy remote

s) Set configuration password

q) Quit config

e/n/d/r/c/s/q> n


# Enter any name you like.

name> gdrive-cache

Type of storage to configure.

Choose a number from below, or type in your own value

 1 / Amazon Drive

   \ "amazon cloud drive"

 2 / Amazon S3 (also Dreamhost, Ceph, Minio)

   \ "s3"

 3 / Backblaze B2

   \ "b2"

 4 / Box

   \ "box"

 5 / Cache a remote

   \ "cache"

 6 / Dropbox

   \ "dropbox"

 7 / Encrypt/Decrypt a remote

   \ "crypt"

 8 / FTP Connection

   \ "ftp"

 9 / Google Cloud Storage (this is not Google Drive)

   \ "google cloud storage"

10 / Google Drive

   \ "drive"

11 / Hubic

   \ "hubic"

12 / Local Disk

   \ "local"

13 / Microsoft Azure Blob Storage

   \ "azureblob"

14 / Microsoft OneDrive

   \ "onedrive"

15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)

   \ "swift"

16 / Pcloud

   \ "pcloud"

17 / QingCloud Object Storage

   \ "qingstor"

18 / SSH/SFTP Connection

   \ "sftp"

19 / Webdav

   \ "webdav"

20 / Yandex Disk

   \ "yandex"

21 / http Connection

   \ "http"


# Select cache.

Storage> 5

Remote to cache.

Normally should contain a ':' and a path, eg "myremote:path/to/dir",

"myremote:bucket" or maybe "myremote:" (not recommended).


# Enter the remote to cache; this is a path on the drive.

# To cache only the /100.djjproject/video folder on Google Drive, you would enter gdrive:/100.djjproject/video.

remote> gdrive:/100.djjproject

Optional: The URL of the Plex server

plex_url>

Optional: The username of the Plex user

plex_username>

Optional: The password of the Plex user

y) Yes type in my own password

g) Generate random password

n) No leave this optional password blank

y/g/n> n

# The cache can reportedly integrate with Plex to improve caching behavior, but I'm not using it for that, so I leave these blank.


# This is the chunk size. Smaller chunks start playback faster but raise the chance of buffering. I set it to 5M.

The size of a chunk. Lower value good for slow connections but can affect seamless reading.

Default: 5M

Choose a number from below, or type in your own value

 1 / 1MB

   \ "1m"

 2 / 5 MB

   \ "5M"

 3 / 10 MB

   \ "10M"

chunk_size> 2


# Cache retention time for object info. One hour seemed right, so I chose 1h.

How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.

Accepted units are: "s", "m", "h".

Default: 6h

Choose a number from below, or type in your own value

 1 / 1 hour

   \ "1h"

 2 / 24 hours

   \ "24h"

 3 / 48 hours

   \ "48h"

info_age> 1


# Total cache size. I set it to 50G (adjust to taste).

The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.

Default: 10G

Choose a number from below, or type in your own value

 1 / 500 MB

   \ "500M"

 2 / 1 GB

   \ "1G"

 3 / 10 GB

   \ "10G"

chunk_total_size> 50G

Remote config

--------------------

[gdrive-cache]

remote = gdrive:/100.djjproject

plex_url =

plex_username =

plex_password =

chunk_size = 5M

info_age = 1h

chunk_total_size = 50G

--------------------

y) Yes this is OK

e) Edit this remote

d) Delete this remote

y/e/d> y

Current remotes:

 

Name                 Type

====                 ====

gdrive               drive

gdrive-cache         cache

 

e) Edit existing remote

n) New remote

d) Delete remote

r) Rename remote

c) Copy remote

s) Set configuration password

q) Quit config

e/n/d/r/c/s/q> q

root@AOL-Debian:~#
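
After quitting, the resulting ~/.config/rclone/rclone.conf should look roughly like this. The OAuth token value is elided here; everything else mirrors the two config sessions above:

```
[gdrive]
type = drive
client_id =
client_secret =
token = {"access_token":"..."}

[gdrive-cache]
type = cache
remote = gdrive:/100.djjproject
plex_url =
plex_username =
plex_password =
chunk_size = 5M
info_age = 1h
chunk_total_size = 50G
```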

 




5. Mounting through the cache remote

 

The page below lists the options you can pass to the cache backend.


https://github.com/ncw/rclone/blob/master/docs/content/cache.md


I haven't found the optimal values yet, but I expect things will improve over time.

 

root@AOL-Debian:~# mkdir /mnt/media_rw/sda1/temp

mkdir: cannot create directory ‘/mnt/media_rw/sda1/temp’: File exists

root@AOL-Debian:~# rclone mount --allow-other --uid=1023 --gid=1023 --umask=000 \

> --cache-chunk-path=/mnt/media_rw/sda1/temp \

> --cache-db-path=/mnt/media_rw/sda1/temp \

> --cache-db-purge \

> --cache-chunk-size=5M \

> --cache-total-chunk-size=50G \

> --cache-chunk-clean-interval=5m \

> --cache-info-age=1h \

> --cache-read-retries=8 \

> --cache-workers=32 \

> --cache-rps=1000 \

> --cache-writes \

> gdrive-cache: /mnt/gdrive

2017/12/22 20:59:19 ERROR : <Cache DB> /mnt/media_rw/sda1/temp/gdrive-cache.db: failed to remove cache file: remove /mnt/media_rw/sda1/temp/gdrive-cache.db: no such file or directory
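
The ERROR about gdrive-cache.db seems harmless on a first run: --cache-db-purge tries to delete a cache DB that doesn't exist yet. If you want the mount to survive logout and reboot, one option is to wrap it in a systemd unit like the sketch below. The unit name and path are my assumptions, not something rclone provides; the flags mirror the command above, minus --cache-db-purge so that restarts keep the accumulated cache:

```
# /etc/systemd/system/rclone-gdrive.service (hypothetical name/path)
[Unit]
Description=rclone cache mount for gdrive
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/rclone mount --allow-other --uid=1023 --gid=1023 --umask=000 \
    --cache-chunk-path=/mnt/media_rw/sda1/temp \
    --cache-db-path=/mnt/media_rw/sda1/temp \
    --cache-chunk-size=5M \
    --cache-total-chunk-size=50G \
    --cache-chunk-clean-interval=5m \
    --cache-info-age=1h \
    --cache-read-retries=8 \
    --cache-workers=32 \
    --cache-rps=1000 \
    --cache-writes \
    gdrive-cache: /mnt/gdrive
ExecStop=/bin/fusermount -u /mnt/gdrive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```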


If you get a FUSE error, install the package below.


root@AOL-Debian:~# apt-get install fuse

Reading package lists... Done

Building dependency tree

Reading state information... Done

The following extra packages will be installed:

  libfuse2

The following NEW packages will be installed:

  fuse libfuse2

0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.

Need to get 194 kB of archives.

After this operation, 326 kB of additional disk space will be used.

Do you want to continue? [Y/n]

Get:1 http://httpredir.debian.org/debian/ jessie/main libfuse2 armhf 2.9.3-15+deb8u2 [125 kB]

Get:2 http://httpredir.debian.org/debian/ jessie/main fuse armhf 2.9.3-15+deb8u2 [69.1 kB]

Fetched 194 kB in 1s (142 kB/s)

Selecting previously unselected package libfuse2:armhf.

(Reading database ... 28450 files and directories currently installed.)

Preparing to unpack .../libfuse2_2.9.3-15+deb8u2_armhf.deb ...

Unpacking libfuse2:armhf (2.9.3-15+deb8u2) ...

Selecting previously unselected package fuse.

Preparing to unpack .../fuse_2.9.3-15+deb8u2_armhf.deb ...

Unpacking fuse (2.9.3-15+deb8u2) ...

Processing triggers for man-db (2.7.0.2-5) ...

Setting up libfuse2:armhf (2.9.3-15+deb8u2) ...

Setting up fuse (2.9.3-15+deb8u2) ...

Processing triggers for libc-bin (2.19-18+deb8u10) ...


The cache files are created as shown below.


root@AOL-Debian:/mnt/media_rw/sda1/temp# ls -li -h

total 25M

6736 -rwxrwxrwx 1 aid_media_rw aid_media_rw 25M Dec  2 03:16 cache.bolt

8071 drwxrwxrwx 1 aid_media_rw aid_media_rw   0 Dec 22 20:59 gdrive-cache

8072 -rwxrwxrwx 1 aid_media_rw aid_media_rw 32K Dec 22 20:59 gdrive-cache.db
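
When you want to take the mount down, release it with fusermount (or just stop the rclone process):

```shell
# Unmount the cache-backed FUSE mountpoint
fusermount -u /mnt/gdrive
```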


I'm not entirely happy with it yet, haha.


