victory's Blog

A single moon over Chang'an; from ten thousand households, the sound of pounding clothes.


Garbage Collection Mechanism

Python's Garbage Collection Mechanism

1. Reference Counting
The principle of reference counting: each object maintains an ob_ref field that records how many references currently point to it. Whenever a new reference points to the object, ob_ref is incremented by 1; whenever a reference to the object goes away, ob_ref is decremented by 1. As soon as an object's reference count drops to 0, the object is reclaimed immediately and the memory it occupies is freed.
Drawback: it cannot handle circular references.
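The counting behavior can be observed directly from Python; a minimal sketch using the standard library (sys.getrefcount reports one extra reference, because the argument passed into the call is itself a temporary reference):

```python
import sys

a = []                       # one reference held by the name `a`
print(sys.getrefcount(a))    # 2: `a` plus the temporary reference created by the call

b = a                        # a second reference to the same list
print(sys.getrefcount(a))    # 3

del b                        # dropping a reference decrements the count
print(sys.getrefcount(a))    # 2
```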
2. Mark and Sweep
Python uses the mark-and-sweep algorithm to handle the circular references that container objects can create.
Mark phase: traverse all objects and mark every object that is reachable, i.e. still referenced by some other object.
Sweep phase: traverse the objects again and reclaim every object that was not marked as reachable.
Advantage: it solves the circular-reference problem.
Drawback: after the mark-and-sweep algorithm has run many times, the program's heap accumulates small memory fragments.
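A minimal sketch of a cycle that reference counting alone cannot free, but the cyclic collector can (gc.collect() returns the number of unreachable objects it found):

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # a reference cycle: the two counts never drop to 0 by themselves
del a, b                      # the cycle is now unreachable but still alive

print(gc.collect() >= 2)      # True: the collector finds and reclaims the two Node objects
```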
3. Generational Collection (assume the thresholds of the young, middle, and old generations are 700, 10, and 10 respectively)
· Every time 701 new objects that need GC have been allocated, a young-generation GC is triggered
· Every 11 young-generation GCs trigger one middle-generation GC
· Every 11 middle-generation GCs trigger one old-generation GC (old-generation GC is also constrained by other policies, so it runs even less often)
· Before a given generation is collected, the object lists of the younger generations are moved into it and collected together
· After an object is created, it is gradually promoted into the old generation over time, and the frequency with which it is collected keeps decreasing
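These thresholds and counters can be inspected through the gc module; a minimal sketch:

```python
import gc

# Per-generation thresholds (CPython defaults: 700, 10, 10)
print(gc.get_threshold())     # (700, 10, 10)

# (gen0 allocations minus deallocations, gen0 collections since the last gen1 collection,
#  gen1 collections since the last gen2 collection)
print(gc.get_count())

# Force a full collection of all generations; returns the number of unreachable objects found
print(gc.collect())
```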


Identifying Users and Activities with Cognitive Signal Processing from a Wearable Headband

Predictions

1. Predicting a person
2. Predicting an activity
3. Predicting a person as well as the activity

Contributions: proposes a histogram-based data representation for brain signals.

This paper shows that histograms of brain signals can be a very useful representation for data mining activities. One of the primary advantages of histograms is that they reduce variable-length signals to fixed-length representations.
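A minimal sketch of the idea (the bin count and value range below are illustrative assumptions, not the paper's settings): recordings of different lengths all map to feature vectors of the same size.

```python
import numpy as np

def signal_histogram(signal, bins=32, value_range=(-100.0, 100.0)):
    # Count samples per amplitude bin, then normalize by the signal length
    # so recordings of different lengths stay comparable.
    counts, _ = np.histogram(signal, bins=bins, range=value_range)
    return counts / max(len(signal), 1)

short = np.random.randn(500) * 20     # a short recording
long_ = np.random.randn(5000) * 20    # a much longer recording
print(signal_histogram(short).shape, signal_histogram(long_).shape)   # (32,) (32,)
```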

Ideas from reading this paper

Combining activity prediction / emotion recognition into a single system.

Cite this paper

Wiechert, G., Triff, M., Liu, Z., Yin, Z., Zhao, S., Zhong, Z., Zhaou, R., & Lingras, P. (2016). Identifying users and activities with cognitive signal processing from a wearable headband. pp. 129-136. doi:10.1109/ICCI-CC.2016.7862025.

Remaining Useful Life Prediction of Machining Tools by 1D-CNN LSTM Network

Contributions

Uses a 1D-CNN LSTM network architecture for RUL prediction of machining tools.

Problem Addressed

It is sometimes difficult for traditional machine learning algorithms to extract the hidden information that characterizes the degradation process of the tool. Deep learning methods tend to work better: they have strong adaptive-learning and anti-noise abilities and can automatically extract deep features, which makes them more versatile than traditional machine learning methods.

Why CNN-LSTM?

A CNN has the capacity to automatically extract features, and an LSTM can effectively mine the hidden information in time series.

In fact, we can combine the CNN's high-dimensional feature-extraction capacity with the LSTM's strength on time-series problems: after the CNN extracts features, we feed them into the LSTM for training, which can bring improvements in both accuracy and speed.

For time-series problems, a one-dimensional convolutional neural network (1D-CNN) is more suitable than an ordinary convolutional neural network. One characteristic of the 1D-CNN is that, for time-series data, the receptive field moves only along the time direction, so local inter-variable correlations can be extracted.
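A minimal PyTorch sketch of this combination (not the paper's exact architecture; the channel counts, kernel sizes, and window shape below are illustrative assumptions): 1D convolutions extract local features along the time axis, an LSTM mines the resulting sequence, and the last hidden state is regressed to a scalar RUL value.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=7, hidden_size=64):
        super().__init__()
        # 1D convolutions: the receptive field slides only along the time axis
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),              # pooling reduces the amount of data and speeds up later layers
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden_size, batch_first=True)
        self.dropout = nn.Dropout(0.5)    # dropout to reduce over-fitting
        self.head = nn.Linear(hidden_size, 1)   # scalar RUL prediction

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        feats = self.cnn(x.transpose(1, 2))
        # back to (batch, time, features) for the LSTM
        out, _ = self.lstm(feats.transpose(1, 2))
        # use the hidden state of the last time step for the prediction
        return self.head(self.dropout(out[:, -1, :]))

model = CNNLSTM()
windows = torch.randn(8, 100, 7)      # 8 windows, 100 time steps, 7 sensor channels (assumed)
print(model(windows).shape)           # torch.Size([8, 1])
```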

Some knowledge points learned

  1. Each convolutional layer consists of several convolutional units whose parameters are optimized by the backpropagation algorithm.
  2. Pooling can effectively reduce the amount of data and increase the calculation speed.
  3. Each RNN unit is a simple chain structure: it processes the input sequence {x1, x2, …, xT} sequentially to construct a corresponding sequence of hidden states {h1, h2, …, hT} (see the sketch after this list).
  4. The main purpose of the dropout layer is to reduce over-fitting.
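A minimal sketch of point 3, assuming a toy input with T = 10 steps of 4 features each: the RNN returns one hidden state per time step plus the final state.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
x = torch.randn(1, 10, 4)        # batch of 1, T = 10 time steps, 4 features per step
hidden_states, h_T = rnn(x)
print(hidden_states.shape)       # torch.Size([1, 10, 8]): {h1, h2, ..., hT}, one per step
print(h_T.shape)                 # torch.Size([1, 1, 8]): the final hidden state hT
```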

    Ideas

  1. Compare the results of 1D-CNN, LSTM, and 1D-CNN LSTM in our own work.
  2. Write our own paper with this paper as a reference (part of the introduction and the network description).
  3. Refer to the charts in the article.

The Difference Between a Network, a Model, and an Algorithm

Network: just a network structure, containing no weight parameters.

Model: after a network has been designed and trained on some dataset, the resulting data that contains the learned weight parameters is called a model.

Algorithm: on top of a model, some code concretely implements certain goals; that code, the model file, and the other related resources are collectively referred to as an algorithm.
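A minimal PyTorch sketch of the distinction (the file name and the omitted training step are illustrative assumptions):

```python
import torch
import torch.nn as nn

# "Network": a bare structure with no learned weights yet (parameters are randomly initialized).
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# "Model": the network plus parameters learned on some dataset, saved as a weight file.
# (the training loop is omitted here; assume `net` has already been fitted)
torch.save(net.state_dict(), "model.pt")

# "Algorithm": code that loads the model and uses it for a concrete purpose, e.g. inference.
net.load_state_dict(torch.load("model.pt"))
print(net(torch.randn(1, 4)))
```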