Android Studio on Windows reports "Can't use Subversion command line client: svn"

If the Command Line Client component was not selected when TortoiseSVN was installed, the error above can occur.

The fix is to re-run the TortoiseSVN installer and select the "command line client tools" feature during installation.
[Screenshot: TortoiseSVNCommandLineInstall]
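As a quick check afterwards (assuming the installer added the client to PATH, which it normally does), the following commands in a cmd window should locate svn.exe and print the client version:

    where svn
    svn --version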

Android emulator: permission and space problems when pushing files into /system, and .so files not being installed automatically

  • Read-only file system

We need to push an APK into the emulator's /system/app directory, but adb reports a read-only file system error.

Solution:
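The exact commands from the original post are not preserved here; a typical way to make /system writable on an emulator image before pushing is (MyApp.apk is a placeholder name):

    adb root                          # restart adbd with root privileges (works on emulator/eng builds)
    adb remount                       # remount /system read-write
    adb push MyApp.apk /system/app/   # now the push succeeds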

  • Out of memory (not enough space)

There are all sorts of theories about the cause and nobody seems to have dug into it very deeply; some emulator images have this problem and some do not.
Solution:
Do not start the emulator from the Eclipse, Android Studio or AVD Manager GUI; start it from the command line instead, for example as sketched below:
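The original command line is not preserved in this copy; a common form, with the AVD name and partition size as placeholders to adapt, is:

    emulator -avd <AVD_NAME> -partition-size 1024   # system/data partition size in MB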

  • An APK containing .so libraries crashes on startup, and the log says the .so files cannot be found

Cause: this is an Android design decision. If an APK under /system/app contains .so files, they are not installed automatically; they have to be pushed manually into "/system/lib", as shown below.
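A minimal sketch (libnative-lib.so is a placeholder name; /system must already be remounted read-write as described above):

    adb push libnative-lib.so /system/lib/   # copy the native library next to the system libraries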

  • On Android 5.0 and later, it is better to push the APK into /system/priv-app

On Android 5.0 and later, the preferred location is the /system/priv-app directory.

  • On Android 5.0 and later, the app is not installed automatically after being pushed into the system directory

Cause: since Android 5.0 the system no longer watches the /system/priv-app directory for changes in real time; it is only scanned at boot (a full reboot is slow, so the steps below save time). We therefore have to notify the system manually, and sometimes the file permissions have to be adjusted first.
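The original commands were not preserved; one common approach, assuming a rooted emulator image, is to fix the APK's permissions and then soft-restart the Android framework so that the package manager rescans the system app directories (MyApp.apk is a placeholder):

    adb shell chmod 644 /system/priv-app/MyApp.apk   # make the pushed APK readable
    adb shell stop                                   # stop the Android framework
    adb shell start                                  # restart it; much faster than rebooting the emulator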

How to find the Cura 13 printer control pages (Temp, Jog, Speed, etc.) in Cura 14 and Cura 15

I had been using the Cura 13 build supplied by the printer vendor for 3D printing. A while ago I noticed that the software had been updated to version 15.04 (note: the latest Cura 15 comes in two builds, an official one for the newest 3D printers and a community build; only the community build has the interface described below), so I downloaded it, only to find that the page used in Cura 13 to test and adjust the printer was gone. The screenshot below shows the Cura 13 control page; the Jog tab is the one mainly used for adjusting the printer.
[Screenshot: Cura_13_Jog]
Versions after Cura 13 slimmed down the print interface to the form shown below, and the old control page is nowhere to be found.
[Screenshot: Cura_15_Print]
In fact the page still exists; it is merely hidden, and the following steps make it visible again.

  • In the "File" menu, choose "Preferences..."

[Screenshot: Cura_15_Preferences]

  • In the "Preferences" window, open the "Printing window type" drop-down and select "Pronterface UI"

[Screenshot: Cura_15_Preferences_Window]

[Screenshot: Cura_15_Preferences_Printing_Window_Type]

  • Click OK to close the settings window, then click the "Print" menu; the more advanced print control interface will now appear.

[Screenshot: Cura_15_After_Window_Type]

[Screenshot: Cura_15_Advance_Printing_Window]

Ubuntu 15.04: copying files to a Btrfs partition fails with "No space left on device" ("拼接文件出错:设备上没有空间")

When installing Ubuntu 15.04 on a machine with an SSD, I chose Btrfs as the format for the home partition. Everything worked fine until today, when copying a 16GB file to the home partition failed with "拼接文件出错:设备上没有空间" (on an English-language system this appears as "No space left on device").

  • Is the disk really out of space?

Checking the partitions with "df" shows that all of them have enough space. As the screenshot below shows, there is plenty of room, and the home partition in particular still has a good 40GB free.
[Screenshot: df_command_when_btrfs_no_space_error]

  • Is the single file too large? Does it exceed a file system limit?

Searching Wikipedia for "btrfs", the summary lists a maximum file size of 16 EiB, so a 16GB file clearly does not exceed that limit.

  • Too many files on the partition? Is the file-count limit exceeded?

The same Wikipedia entry on btrfs lists a maximum of 2^64 files; on a 120GB drive, even if it were filled entirely with one-byte files, that number could never be reached.

  • Are the inodes exhausted?

Querying the inode usage with "df -i" reveals something strange: for the partition holding /home, the inode counts (used, free and total) are all 0. Why?
[Screenshot: df_i_command_when_btrfs_no_space]
It turns out that df is not meaningful for Btrfs; Btrfs has its own command for querying usage, shown below.
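The command whose output appears in the screenshot below is the standard Btrfs query (the /home mount point is assumed):

    sudo btrfs filesystem df /home   # usage broken down into data / metadata / system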

[Screenshot: btrfs_fi_df_i_command_when_btrfs_no_space]

Looking closely at the output, something odd stands out: df reported the partition as roughly 90GB, yet here the total shown is only about 43GB, of which 42.50GB is already in use. By these numbers the device really is out of space, so where did the rest of the space go?

  • The root cause of the problem

The problem is essentially a consequence of Btrfs's design, specifically its copy-on-write (COW) mechanism, which needs a fairly large amount of reserved space. When free space runs low the reservation ought to shrink, but by default this case is clearly not handled correctly. The issue is handled much better from version 3.18 onwards.

  • Solution

For Btrfs versions before 3.18, running the following command is enough.
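The exact command was not preserved in this copy; based on the Btrfs FAQ referenced below, what is being described is a balance that frees up mostly-empty chunks (the mount point and threshold are illustrative; raise -dusage if nothing gets freed):

    sudo btrfs balance start -dusage=5 /home   # rewrite data chunks that are less than 5% used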

From version 3.18 onwards this command is effectively run by default when an out-of-space condition occurs; unfortunately, the Btrfs version in Ubuntu 15.04 is 3.17.

  • Commonly used Btrfs commands

Show Btrfs file system information:
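The commands were stripped from this copy; presumably the standard queries:

    sudo btrfs filesystem show         # devices and allocated space per Btrfs file system
    sudo btrfs filesystem df /home     # usage per data / metadata / system type for a mount point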

Check a Btrfs file system for errors (requires rebooting into recovery mode so the file system is not mounted):
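Again the command itself is missing here; the usual tool is btrfs check, run against the unmounted device (the device name is a placeholder):

    sudo btrfs check /dev/sdXN            # read-only consistency check
    sudo btrfs check --repair /dev/sdXN   # attempt repairs; use with caution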

  • Reference links

Btrfs Problem FAQ
Ubuntu thinks btrfs disk is full but its not

Ubuntu thinks btrfs disk is full but its not

Since foreign sites are often unreachable from here, the content is copied below. Original link

Btrfs is different from traditional filesystems. It is not just a layer that translates filenames into offsets on a block device, it is more of a layer that combines a traditional filesystem with LVM and RAID. And like LVM, it has the concept of allocating space on the underlying device, but not actually using it for files.

A traditional filesystem is divided into files and free space. It is easy to calculate how much space is used or free:

Btrfs combines LVM, RAID and a filesystem. The drive is divided into subvolumes, each dynamically sized and replicated:

The diagram shows the partition being divided into two subvolumes and metadata. One of the subvolumes is duplicated (RAID1), so there are two copies of every file on the device. Now we not only have the concept of how much space is free at the filesystem layer, but also how much space is free at the block layer (drive partition) below it. Space is also taken up by metadata.

When considering free space in Btrfs, we have to clarify which free space we are talking about - the block layer, or the file layer? At the block layer, data is allocated in 1GB chunks, so the values are quite coarse, and might not bear any relation to the amount of space that the user can actually use. At the file layer, it is impossible to report the amount of free space because the amount of space depends on how it is used. In the above example, a file stored on the replicated subvolume @raid1 will take up twice as much space as the same file stored on the @home subvolume. Snapshots only store copies of files that have been subsequently modified. There is no longer a 1-1 mapping between a file as the user sees it, and a file as stored on the drive.

You can check the free space at the block layer with btrfs filesystem show / and the free space at the subvolume layer with btrfs filesystem df /


For this mounted subvolume, df reports a drive of total size 38G, with 12G used, and 13M free. 100% of the available space has been used. Remember that the total size 38G is divided between different subvolumes and metadata - it is not exclusive to this subvolume.

Each line shows the total space and the used space for a different data type and replication type. The values shown are data stored rather than raw bytes on the drive, so if you're using RAID-1 or RAID-10 subvolumes, the amount of raw storage used is double the values you can see here.

The first column shows the type of item being stored (Data, System, Metadata). The second column shows whether a single copy of each item is stored (single), or whether two copies of each item are stored (DUP). Two copies are used for sensitive data, so there is a backup if one copy is corrupted. For DUP lines, the used value has to be doubled to get the amount of space used on the actual drive (because btrfs fs df reports data stored, not drive space used). The third and fourth columns show the total and used space. There is no free column, since the amount of "free space" is dependent on how it is used.

The thing that stands out about this drive is that you have 9.47GiB of space allocated for ordinary files of which you have used 9.46GiB - this is why you are getting No space left on device errors. You have 13.88GiB of space allocated for duplicated metadata, of which you have used 1.13GiB. Since this metadata is DUP duplicated, it means that 27.76GiB of space has been allocated on the actual drive, of which you have used 2.26GiB. Hence 25.5GiB of the drive is not being used, but at the same time is not available for files to be stored in. This is the "Btrfs huge metadata allocated" problem. To try and correct this, run btrfs balance start -m /. The -m parameter tells btrfs to only re-balance metadata.

A similar problem is running out of metadata space. If the output had shown that the metadata were actually full (used value close to total), then the solution would be to try and free up almost empty (<5% used) data blocks using the command btrfs balance start -dusage=5 /. These free blocks could then be reused to store metadata.

For more details see the Btrfs FAQs:

Fixing Btrfs Filesystem Full Problems

Since the original article's address no longer loads, the content of Google's cached copy is reproduced below. Original link

Clear space now

If you have historical snapshots, the quickest way to get space back so that you can look at the filesystem and apply better fixes and cleanups is to drop the oldest historical snapshots.

Two things to note:

  • If you have historical snapshots as described here, delete the oldest ones first, and wait (see below). However if you just deleted 100GB, and replaced it with another 100GB which failed to fully write, giving you out of space, all your snapshots will have to be deleted to clear the blocks of that old file you just removed to make space for the new one (actually if you know exactly what file it is, you can go in all your snapshots and manually delete it, but in the common case it'll be multiple files and you won't know which ones, so you'll have to drop all your snapshots before you get the space back).
  • After deleting snapshots, it can take a minute or more for btrfs fi show to show the space freed. Do not be too impatient, run btrfs fi show in a loop and see if the number changes every minute. If it does not, carry on and delete other snapshots or look at rebalancing.

Note that even in the cases described below, you may have to clear one snapshot or more to make space before btrfs balance can run. As a corollary, btrfs can get in states where it's hard to get it out of the 'no space' state it's in. As a result, even if you don't need snapshots, keeping at least one around to free up space should you hit that mis-feature/bug can be handy.

Is your filesystem really full? Mis-balanced data chunks

Look at filesystem show output:

Only about 50% of the space is used (441 out of 865GB), but the device is 88% full (751 out of 865GB). Unfortunately it's not uncommon for a btrfs device to fill up due to the fact that it does not rebalance chunks (3.18+ has started freeing empty chunks, which is a step in the right direction).

In the case above, because the filesystem is only 55% full, I can ask balance to rewrite all chunks that have less than 55% space used. Rebalancing those blocks actually means taking the data in those blocks, and putting it in fuller blocks so that you end up being able to free the less used blocks.
This means the bigger the -dusage value, the more work balance will have to do (ie taking fuller and fuller blocks and trying to free them up by putting their data elsewhere). Also, if your FS is 55% full, using -dusage=55 is ok, but there isn't a 1 to 1 correlation and you'll likely be ok with a smaller dusage number, so start small and ramp up as needed.

    # Follow the progress along with:
    legolas:~# while :; do btrfs balance status -v /mnt/btrfs_pool1; sleep 60; done
    Balance on '/mnt/btrfs_pool1' is running
    10 out of about 315 chunks balanced (22 considered), 97% left
    Dumping filters: flags 0x1, state 0x1, force is off
      DATA (flags 0x2): balancing, usage=55
    Balance on '/mnt/btrfs_pool1' is running
    16 out of about 315 chunks balanced (28 considered), 95% left
    Dumping filters: flags 0x1, state 0x1, force is off
      DATA (flags 0x2): balancing, usage=55
    (...)

When it's over, the filesystem now looks like this (note devid used is now 513GB instead of 751GB):

Before you ask, yes, btrfs should do this for you on its own, but currently doesn't as of 3.14.

Is your filesystem really full? Misbalanced metadata

Unfortunately btrfs has another failure case where the metadata space can fill up. When this happens, even though you have data space left, no new files will be writeable.

In the example below, you can see Metadata DUP 9.5GB out of 10GB. Btrfs keeps 0.5GB for itself, so in the case above, metadata is full and prevents new writes.

One suggested way is to force a full rebalance, and in the example below you can see metadata goes back down to 7.39GB after it's done. Yes, there again, it would be nice if btrfs did this on its own. It will one day (some of it is now in 3.18).

Sometimes, just using -dusage=0 is enough to rebalance metadata (this is now done automatically in 3.18 and above), but if it's not enough, you'll have to increase the number.

Balance cannot run because the filesystem is full

One trick to get around this is to add a device (even a USB key will do) to your btrfs filesystem. This should allow balance to start, and then you can remove the device with btrfs device delete when the balance is finished.
It's also been said on the list that kernel 3.14 can fix some balancing issues that older kernels can't, so give that a shot if your kernel is old.

Note, it's even possible for a filesystem to be full in a way that you cannot even delete snapshots to free space. This shows how you would work around it:

Misc Balance Resources

For more info, please read:

Shutter, a screenshot and annotation tool for Ubuntu

Having got used to QQ's screenshot tool on Windows and Skitch on the Mac, which make it very easy to capture a window and then annotate the image with arrows, rectangles and other editing tools, I searched for quite a while on Ubuntu before finding a tool with comparable features: Shutter.

Installation command:
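The command itself was stripped from this copy; Shutter is in the standard Ubuntu repositories:

    sudo apt-get install shutter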

You can also install it by searching for Shutter in the Ubuntu Software Center.

Basic usage is illustrated by the screenshots below:
[Screenshot: Shutter_Main]
[Screenshot: Shutter_Edit]

ERROR: Non-debuggable application installed on the target device. Please re-install the debuggable version!

Note: everything described in this post assumes the conditions below are fully met, that the NDK build was done with NDK_DEBUG=1, and that the packaged APK already contains gdbserver and gdb.setup.

While debugging NDK code recently I ran into a rather tricky problem: ndk-gdb kept reporting "ERROR: Non-debuggable application installed on the target device. Please re-install the debuggable version!", as shown below:

Asking ndk-gdb to print a detailed trace of what it is doing:
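The verbose log from the original post is not preserved; the trace is produced by running ndk-gdb with its verbose flag from the project directory:

    ndk-gdb --verbose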

Several things in the output look odd. For example, the failing line says "ERROR: Could not find gdbserver binary under ./libs/", whereas normally the path should be something like "./libs/armeabi-v7a" or "./libs/armeabi"; likewise, the "Compatible device ABI:" field should never be empty. This suggests that the CPU/ABI information was not read correctly from the APK. Also note the line "WARNING: APP_PLATFORM android-19 is larger than android:minSdkVersion 14 in ./AndroidManifest.xml".

Fortunately Google also provides a Python version of ndk-gdb, so on Ubuntu 15.04 we can step through it with Spyder and see exactly where things go wrong.
The setup looks like this:
[Screenshot: Configure_Spyder]
Set the working directory for debugging to the directory in the project where ndk-gdb is normally run:
[Screenshot: Configure_Path]
Stepping through the script reveals the problem shown below:
[Screenshot: ndk_gdb_py_bug]

In other words, when the version set in AndroidManifest.xml ("<uses-sdk android:minSdkVersion="14" />") and the version set in Application.mk ("APP_PLATFORM := android-19") do not match, the APP_ABIS value returned is not the normal list such as "[armeabi,armeabi-v7a]" but the full text of the warning message instead, even though the mismatch is only a warning. In other words, this is a bug in Google's script.

Once the cause is understood, the fix is simple: just make the two settings consistent, for example as sketched below.
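A minimal sketch of one way to align the two settings, keeping the manifest's value and lowering APP_PLATFORM to match (raising android:minSdkVersion to 19 instead would work just as well):

    # Application.mk — match the manifest's minSdkVersion
    APP_PLATFORM := android-14

    <!-- AndroidManifest.xml, unchanged -->
    <uses-sdk android:minSdkVersion="14" />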

Resetting the username and password in the TortoiseSVN client

The first time you check out from a server with TortoiseSVN it asks for a username and password, and below the input fields there is an option to save the authentication data. If you tick it, you never have to type the credentials again.
However, if the username or password is later changed on the server, the next checkout simply fails, and the client is not smart enough to pop up the credentials dialog again so you can update them. I searched for a long time without finding where to change the stored username and password.
In the end I found two solutions:
Method 1: in the TortoiseSVN Settings dialog, select "Saved Data" and click the "Clear" button on the "Authentication data" row to clear the saved credentials; the next checkout will pop up the username/password dialog again.
[Screenshot: 2012032209211811]
If method 1 does not work, use method 2:
TortoiseSVN caches the username, password and other authentication data in the following directory on the client machine:
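The directory path was stripped from this copy; Subversion (and therefore TortoiseSVN) normally keeps its authentication cache under the current user's application data folder:

    %APPDATA%\Subversion\auth
    (typically C:\Users\<username>\AppData\Roaming\Subversion\auth)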

Delete all the folders under auth, then connect to the remote server and check out again; the credentials dialog will appear!

Reference link: TortoiseSVN客户端重新设置用户名和密码

Viewing the list of functions exported by a .so on Linux

1. View only the exported functions
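The commands were stripped from this copy; the usual tools are nm and objdump (libfoo.so is a placeholder name):

    nm -D libfoo.so        # dynamic symbol table (exported and imported symbols)
    objdump -T libfoo.so   # the same information via objdump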

2. View more detailed binary information
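Again assuming libfoo.so as a placeholder, readelf dumps far more detail:

    readelf -a libfoo.so   # headers, sections, dynamic info and symbol tables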

Note that when listing functions with readelf, long function names may be truncated in the display; if you only need the exported functions, objdump is the better choice.

3. View the linked libraries
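A sketch of two common ways to list the libraries a .so links against (libfoo.so is a placeholder):

    readelf -d libfoo.so | grep NEEDED   # direct dependencies recorded in the ELF dynamic section
    ldd libfoo.so                        # resolved dependencies (actually loads the library)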