稀牛 Lesson 1
Python Basics
Contents
- Introduction to Python
- Basic data structures
- Variables and expressions
- Strings
- Lists
- Conditional statements
- Loops
- Sets (set), tuples (tuple), dictionaries (dict)
Introduction to Python
- Python vs. R
- It is realistic to be writing Python within a day
- Popularity: deep learning, artificial intelligence
Deep learning
Google: TensorFlow
Facebook: PyTorch + Caffe2
Amazon: MXNet
Machine learning
scikit-learn, NumPy, pandas, XGBoost/LightGBM
Big data
Hadoop (MapReduce, Java), Spark (Scala)
All of these ecosystems can be driven from Python.
- Data scientists solve real-world problems in a data-driven way, but end up spending a large share of their time coding
- Ideally, most of your effort should go into analyzing data, understanding data, and solving the problem itself
1. Getting help
- Python has built-in documentation: help()
- dir() lists an object's attributes
import pandas as pd
# Data science competitions (Alibaba, JD, etc.) run regularly and are well worth entering
help(pd)
Help on package pandas:
NAME
pandas
DESCRIPTION
pandas - a powerful data analysis and manipulation library for Python
=====================================================================
**pandas** is a Python package providing fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data both
easy and intuitive. It aims to be the fundamental high-level building block for
doing practical, **real world** data analysis in Python. Additionally, it has
the broader goal of becoming **the most powerful and flexible open source data
analysis / manipulation tool available in any language**. It is already well on
its way toward this goal.
Main Features
-------------
Here are just a few of the things that pandas does well:
- Easy handling of missing data in floating point as well as non-floating
point data
- Size mutability: columns can be inserted and deleted from DataFrame and
higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned
to a set of labels, or the user can simply ignore the labels and let
`Series`, `DataFrame`, etc. automatically align the data for you in
computations
- Powerful, flexible group by functionality to perform split-apply-combine
operations on data sets, for both aggregating and transforming data
- Make it easy to convert ragged, differently-indexed data in other Python
and NumPy data structures into DataFrame objects
- Intelligent label-based slicing, fancy indexing, and subsetting of large
data sets
- Intuitive merging and joining data sets
- Flexible reshaping and pivoting of data sets
- Hierarchical labeling of axes (possible to have multiple labels per tick)
- Robust IO tools for loading data from flat files (CSV and delimited),
Excel files, databases, and saving/loading data from the ultrafast HDF5
format
- Time series-specific functionality: date range generation and frequency
conversion, moving window statistics, moving window linear regressions,
date shifting and lagging, etc.
PACKAGE CONTENTS
_libs (package)
_version
api (package)
compat (package)
computation (package)
conftest
core (package)
errors (package)
formats (package)
io (package)
json
lib
parser
plotting (package)
stats (package)
testing
tests (package)
tools (package)
tseries (package)
tslib
types (package)
util (package)
SUBMODULES
_hashtable
_lib
_tslib
offsets
DATA
IndexSlice = <pandas.core.indexing._IndexSlice object>
NaT = NaT
__docformat__ = 'restructuredtext'
datetools = <module 'pandas.core.datetools' from '/home/bog/...ython3....
describe_option = <pandas.core.config.CallableDynamicDoc object>
get_option = <pandas.core.config.CallableDynamicDoc object>
json = <module 'pandas.json' from '/home/bog/anaconda3/lib/python3.6/s...
lib = <module 'pandas.lib' from '/home/bog/anaconda3/lib/python3.6/sit...
options = <pandas.core.config.DictWrapper object>
parser = <module 'pandas.parser' from '/home/bog/anaconda3/lib/python3...
plot_params = {'xaxis.compat': False}
reset_option = <pandas.core.config.CallableDynamicDoc object>
set_option = <pandas.core.config.CallableDynamicDoc object>
tslib = <module 'pandas.tslib' from '/home/bog/anaconda3/lib/python3.6...
VERSION
0.20.3
FILE
/home/bog/anaconda3/lib/python3.6/site-packages/pandas/__init__.py
import tensorflow as tf
help(tf)
Help on package tensorflow:
NAME
tensorflow
DESCRIPTION
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
PACKAGE CONTENTS
contrib (package)
core (package)
examples (package)
python (package)
tensorboard (package)
tools (package)
SUBMODULES
app
compat
errors
estimator
flags
gfile
graph_util
image
layers
logging
losses
metrics
nn
python_io
pywrap_tensorflow
resource_loader
saved_model
sdca
sets
spectral
summary
sysconfig
test
train
user_ops
DATA
COMPILER_VERSION = '4.8.4'
GIT_VERSION = 'v1.1.0-rc0-61-g1ec6ed5'
GRAPH_DEF_VERSION = 21
GRAPH_DEF_VERSION_MIN_CONSUMER = 0
GRAPH_DEF_VERSION_MIN_PRODUCER = 0
QUANTIZED_DTYPES = frozenset({tf.qint8, tf.quint8, tf.qint32, tf.qint1...
VERSION = '1.1.0'
__compiler_version__ = '4.8.4'
__git_version__ = 'v1.1.0-rc0-61-g1ec6ed5'
bfloat16 = tf.bfloat16
bool = tf.bool
complex128 = tf.complex128
complex64 = tf.complex64
contrib = <tensorflow._LazyContribLoader object>
double = tf.float64
float16 = tf.float16
float32 = tf.float32
float64 = tf.float64
half = tf.float16
int16 = tf.int16
int32 = tf.int32
int64 = tf.int64
int8 = tf.int8
newaxis = None
qint16 = tf.qint16
qint32 = tf.qint32
qint8 = tf.qint8
quint16 = tf.quint16
quint8 = tf.quint8
resource = tf.resource
string = tf.string
uint16 = tf.uint16
uint8 = tf.uint8
VERSION
1.1.0
FILE
/home/bog/anaconda3/lib/python3.6/site-packages/tensorflow/__init__.py
Tab gives auto-completion: type tf. and press Tab, and the attributes that can follow are listed.
import sklearn
dir(sklearn.clone)
['__annotations__',
'__call__',
'__class__',
'__closure__',
'__code__',
'__defaults__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__get__',
'__getattribute__',
'__globals__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__kwdefaults__',
'__le__',
'__lt__',
'__module__',
'__name__',
'__ne__',
'__new__',
'__qualname__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__']
dir(tf)
['AggregationMethod',
'Assert',
'AttrValue',
'COMPILER_VERSION',
'ConditionalAccumulator',
'ConditionalAccumulatorBase',
'ConfigProto',
'DType',
'DeviceSpec',
'Dimension',
'Event',
'FIFOQueue',
'FixedLenFeature',
'FixedLenSequenceFeature',
'FixedLengthRecordReader',
'GIT_VERSION',
'GPUOptions',
'GRAPH_DEF_VERSION',
'GRAPH_DEF_VERSION_MIN_CONSUMER',
'GRAPH_DEF_VERSION_MIN_PRODUCER',
'Graph',
'GraphDef',
'GraphKeys',
'GraphOptions',
'HistogramProto',
'IdentityReader',
'IndexedSlices',
'InteractiveSession',
'LogMessage',
'NameAttrList',
'NoGradient',
'NodeDef',
'NotDifferentiable',
'OpError',
'Operation',
'OptimizerOptions',
'PaddingFIFOQueue',
'Print',
'PriorityQueue',
'QUANTIZED_DTYPES',
'QueueBase',
'RandomShuffleQueue',
'ReaderBase',
'RegisterGradient',
'RunMetadata',
'RunOptions',
'Session',
'SessionLog',
'SparseConditionalAccumulator',
'SparseFeature',
'SparseTensor',
'SparseTensorValue',
'Summary',
'TFRecordReader',
'Tensor',
'TensorArray',
'TensorInfo',
'TensorShape',
'TextLineReader',
'VERSION',
'VarLenFeature',
'Variable',
'VariableScope',
'WholeFileReader',
'_LazyContribLoader',
'__builtins__',
'__cached__',
'__compiler_version__',
'__doc__',
'__file__',
'__git_version__',
'__loader__',
'__name__',
'__package__',
'__path__',
'__spec__',
'__version__',
'abs',
'accumulate_n',
'acos',
'add',
'add_check_numerics_ops',
'add_n',
'add_to_collection',
'all_variables',
'app',
'arg_max',
'arg_min',
'argmax',
'argmin',
'as_dtype',
'as_string',
'asin',
'assert_equal',
'assert_greater',
'assert_greater_equal',
'assert_integer',
'assert_less',
'assert_less_equal',
'assert_negative',
'assert_non_negative',
'assert_non_positive',
'assert_none_equal',
'assert_positive',
'assert_proper_iterable',
'assert_rank',
'assert_rank_at_least',
'assert_type',
'assert_variables_initialized',
'assign',
'assign_add',
'assign_sub',
'atan',
'batch_to_space',
'batch_to_space_nd',
'betainc',
'bfloat16',
'bincount',
'bitcast',
'bool',
'boolean_mask',
'broadcast_dynamic_shape',
'broadcast_static_shape',
'case',
'cast',
'ceil',
'check_numerics',
'cholesky',
'cholesky_solve',
'clip_by_average_norm',
'clip_by_global_norm',
'clip_by_norm',
'clip_by_value',
'compat',
'complex',
'complex128',
'complex64',
'concat',
'cond',
'confusion_matrix',
'conj',
'constant',
'constant_initializer',
'container',
'contrib',
'control_dependencies',
'convert_to_tensor',
'convert_to_tensor_or_indexed_slices',
'convert_to_tensor_or_sparse_tensor',
'cos',
'count_nonzero',
'count_up_to',
'create_partitioned_variables',
'cross',
'cumprod',
'cumsum',
'decode_base64',
'decode_csv',
'decode_json_example',
'decode_raw',
'delete_session_tensor',
'depth_to_space',
'dequantize',
'deserialize_many_sparse',
'device',
'diag',
'diag_part',
'digamma',
'div',
'divide',
'double',
'dynamic_partition',
'dynamic_stitch',
'edit_distance',
'einsum',
'encode_base64',
'equal',
'erf',
'erfc',
'errors',
'estimator',
'exp',
'expand_dims',
'expm1',
'extract_image_patches',
'eye',
'fake_quant_with_min_max_args',
'fake_quant_with_min_max_args_gradient',
'fake_quant_with_min_max_vars',
'fake_quant_with_min_max_vars_gradient',
'fake_quant_with_min_max_vars_per_channel',
'fake_quant_with_min_max_vars_per_channel_gradient',
'fft',
'fft2d',
'fft3d',
'fill',
'fixed_size_partitioner',
'flags',
'float16',
'float32',
'float64',
'floor',
'floor_div',
'floordiv',
'floormod',
'foldl',
'foldr',
'gather',
'gather_nd',
'get_collection',
'get_collection_ref',
'get_default_graph',
'get_default_session',
'get_local_variable',
'get_seed',
'get_session_handle',
'get_session_tensor',
'get_variable',
'get_variable_scope',
'gfile',
'global_norm',
'global_variables',
'global_variables_initializer',
'gradients',
'graph_util',
'greater',
'greater_equal',
'group',
'half',
'hessians',
'histogram_fixed_width',
'identity',
'ifft',
'ifft2d',
'ifft3d',
'igamma',
'igammac',
'imag',
'image',
'import_graph_def',
'initialize_all_tables',
'initialize_all_variables',
'initialize_local_variables',
'initialize_variables',
'int16',
'int32',
'int64',
'int8',
'invert_permutation',
'is_finite',
'is_inf',
'is_nan',
'is_non_decreasing',
'is_numeric_tensor',
'is_strictly_increasing',
'is_variable_initialized',
'layers',
'lbeta',
'less',
'less_equal',
'lgamma',
'lin_space',
'linspace',
'load_file_system_library',
'load_op_library',
'local_variables',
'local_variables_initializer',
'log',
'log1p',
'logging',
'logical_and',
'logical_not',
'logical_or',
'logical_xor',
'losses',
'make_template',
'map_fn',
'matching_files',
'matmul',
'matrix_band_part',
'matrix_determinant',
'matrix_diag',
'matrix_diag_part',
'matrix_inverse',
'matrix_set_diag',
'matrix_solve',
'matrix_solve_ls',
'matrix_transpose',
'matrix_triangular_solve',
'maximum',
'meshgrid',
'metrics',
'min_max_variable_partitioner',
'minimum',
'mod',
'model_variables',
'moving_average_variables',
'multinomial',
'multiply',
'name_scope',
'negative',
'newaxis',
'nn',
'no_op',
'no_regularizer',
'norm',
'not_equal',
'one_hot',
'ones',
'ones_initializer',
'ones_like',
'op_scope',
'orthogonal_initializer',
'pad',
'parallel_stack',
'parse_example',
'parse_single_example',
'parse_single_sequence_example',
'parse_tensor',
'placeholder',
'placeholder_with_default',
'polygamma',
'pow',
'py_func',
'python_io',
'pywrap_tensorflow',
'qint16',
'qint32',
'qint8',
'qr',
'quantize_v2',
'quantized_concat',
'quint16',
'quint8',
'random_crop',
'random_gamma',
'random_normal',
'random_normal_initializer',
'random_poisson',
'random_shuffle',
'random_uniform',
'random_uniform_initializer',
'range',
'rank',
'read_file',
'real',
'realdiv',
'reciprocal',
'reduce_all',
'reduce_any',
'reduce_join',
'reduce_logsumexp',
'reduce_max',
'reduce_mean',
'reduce_min',
'reduce_prod',
'reduce_sum',
'register_tensor_conversion_function',
'report_uninitialized_variables',
'required_space_to_batch_paddings',
'reset_default_graph',
'reshape',
'resource',
'resource_loader',
'reverse',
'reverse_sequence',
'reverse_v2',
'rint',
'round',
'rsqrt',
'saturate_cast',
'saved_model',
'scalar_mul',
'scan',
'scatter_add',
'scatter_div',
'scatter_mul',
'scatter_nd',
'scatter_nd_add',
'scatter_nd_sub',
'scatter_nd_update',
'scatter_sub',
'scatter_update',
'sdca',
'segment_max',
'segment_mean',
'segment_min',
'segment_prod',
'segment_sum',
'self_adjoint_eig',
'self_adjoint_eigvals',
'sequence_mask',
'serialize_many_sparse',
'serialize_sparse',
'set_random_seed',
'setdiff1d',
'sets',
'shape',
'shape_n',
'sigmoid',
'sign',
'sin',
'size',
'slice',
'space_to_batch',
'space_to_batch_nd',
'space_to_depth',
'sparse_add',
'sparse_concat',
'sparse_fill_empty_rows',
'sparse_mask',
'sparse_matmul',
'sparse_maximum',
'sparse_merge',
'sparse_minimum',
'sparse_placeholder',
'sparse_reduce_sum',
'sparse_reduce_sum_sparse',
'sparse_reorder',
'sparse_reset_shape',
'sparse_reshape',
'sparse_retain',
'sparse_segment_mean',
'sparse_segment_sqrt_n',
'sparse_segment_sum',
'sparse_softmax',
'sparse_split',
'sparse_tensor_dense_matmul',
'sparse_tensor_to_dense',
'sparse_to_dense',
'sparse_to_indicator',
'sparse_transpose',
'spectral',
'split',
'sqrt',
'square',
'squared_difference',
'squeeze',
'stack',
'stop_gradient',
'strided_slice',
'string',
'string_join',
'string_split',
'string_to_hash_bucket',
'string_to_hash_bucket_fast',
'string_to_hash_bucket_strong',
'string_to_number',
'substr',
'subtract',
'summary',
'svd',
'sysconfig',
'tables_initializer',
'tan',
'tanh',
'tensordot',
'test',
'tile',
'to_bfloat16',
'to_double',
'to_float',
'to_int32',
'to_int64',
'trace',
'train',
'trainable_variables',
'transpose',
'truediv',
'truncated_normal',
'truncated_normal_initializer',
'truncatediv',
'truncatemod',
'tuple',
'uint16',
'uint8',
'uniform_unit_scaling_initializer',
'unique',
'unique_with_counts',
'unsorted_segment_max',
'unsorted_segment_sum',
'unstack',
'user_ops',
'variable_axis_size_partitioner',
'variable_op_scope',
'variable_scope',
'variables_initializer',
'verify_tensor_all_finite',
'where',
'while_loop',
'write_file',
'zeros',
'zeros_initializer',
'zeros_like',
'zeta']
Python arithmetic
- operators: +, -, *, /, //, **, %
6/4
1.5
6//4
1
4**0.5
2.0
4%3
1
Python basic data types, variables, operations, and expressions
3. Variables
Basic data types:
- int: integers
- float: floating-point numbers
- str: strings
- bool: booleans
- type() returns the type of a value
X = 12
type(X)
int
y = -5.32
type(y)
float
z = 'bog5d'
type(z)
str
z[:3]
'bog'
a ="bog\n5d"
print(a)
bog
5d
a1 = 'bog\n5d'
print(a1)
bog
5d
d = True
type(d)
bool
e = pd.DataFrame()
type(e)
pandas.core.frame.DataFrame
Expressions
- In Python you combine values and operators into expressions
### Slicing
a = 'bog5diloveyou'
a
'bog5diloveyou'
a[4]
'd'
a[-3]
'y'
a[1:4]
#from index 1 up to, but not including, index 4
#slices are half-open: the start is included, the end is excluded
'og5'
a[-6:-2]
'ovey'
a[2:]
'g5diloveyou'
a[:5]
'bog5d'
String functions
dir(str)
['__add__',
'__class__',
'__contains__',
'__delattr__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__getitem__',
'__getnewargs__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__iter__',
'__le__',
'__len__',
'__lt__',
'__mod__',
'__mul__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__rmod__',
'__rmul__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'capitalize',
'casefold',
'center',
'count',
'encode',
'endswith',
'expandtabs',
'find',
'format',
'format_map',
'index',
'isalnum',
'isalpha',
'isdecimal',
'isdigit',
'isidentifier',
'islower',
'isnumeric',
'isprintable',
'isspace',
'istitle',
'isupper',
'join',
'ljust',
'lower',
'lstrip',
'maketrans',
'partition',
'replace',
'rfind',
'rindex',
'rjust',
'rpartition',
'rsplit',
'rstrip',
'split',
'splitlines',
'startswith',
'strip',
'swapcase',
'title',
'translate',
'upper',
'zfill']
a.upper()
'BOG5DILOVEYOU'
a.endswith('u')
True
b= " "+a+"-yes!!!"
b
' bog5diloveyou-yes!!!'
b=b.strip()
c = "我 愛 達州"
c.split(" ")
['我', '愛', '達州']
b.find('5d')
3
Lists (list)
A list is a Python data structure representing an ordered sequence of items.
names = ["bobi","eva","joe","二狗","李雷","hanmeimei"]
#elements can be of any type
type(list)
type
type(names)
list
dir(list)
['__add__',
'__class__',
'__contains__',
'__delattr__',
'__delitem__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__getattribute__',
'__getitem__',
'__gt__',
'__hash__',
'__iadd__',
'__imul__',
'__init__',
'__init_subclass__',
'__iter__',
'__le__',
'__len__',
'__lt__',
'__mul__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__reversed__',
'__rmul__',
'__setattr__',
'__setitem__',
'__sizeof__',
'__str__',
'__subclasshook__',
'append',
'clear',
'copy',
'count',
'extend',
'index',
'insert',
'pop',
'remove',
'reverse',
'sort']
len(names)
6
mixed = ['bobi',2,3.14,[1,2,34]]
len(mixed)
4
### Slicing
mixed[1]
2
mixed[3]
[1, 2, 34]
mixed[3][2]
34
mixed[2:]
[3.14, [1, 2, 34]]
split <==> join (join operates on a list of strings)
names
['bobi', 'eva', 'joe', '二狗', '李雷', 'hanmeimei']
mixed
['bobi', 2, 3.14, [1, 2, 34]]
" ## ".join(names)
'bobi ## eva ## joe ## 二狗 ## 李雷 ## hanmeimei'
"+".join(names)
'bobi+eva+joe+二狗+李雷+hanmeimei'
append (adds an element to the end of the list)
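The calls that produced the '36' entries shown below were not captured in these notes; presumably something like the following was run:
mixed.append('36')
mixed.append('36')
mixed.append('36')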
mixed
['bobi', 2, 3.14, [1, 2, 34], '36', '36', '36']
mixed.extend([54,66,99])
mixed
['bobi', 2, 3.14, [1, 2, 34], '36', '36', '36', 54, 66, 99]
mixed.index('36')
4
mixed.pop()
99
mixed
['bobi', 2, 3.14, [1, 2, 34], '36', '36', '36', 54, 66]
Flow control
Conditionals: if / elif / else
#classify a person by age
age = 25
if age>60:
    print("一把老骨頭啊")
elif age>35:
    print("形狀之年啊")
else:
    print("小年輕")
小年輕
for name in names:
    print(name)
bobi
eva
joe
二狗
李雷
hanmeimei
for i,name in enumerate(names):
    print(i,name)
0 bobi
1 eva
2 joe
3 二狗
4 李雷
5 hanmeimei
i = 0
while i<10:
    print(i)
    i+=1
0
1
2
3
4
5
6
7
8
9
i = 0
while True:
    print(i)
    i+=1
    if i>6:
        break
0
1
2
3
4
5
6
i = 0
while True:
    i+=1
    if i%3==0:
        continue
    print(i)
    if i>6:
        break
1
2
4
5
7
for name in names:
    print('my name is'+name)
my name isbobi
my name iseva
my name isjoe
my name is二狗
my name is李雷
my name ishanmeimei
### List comprehensions
['my name is '+name for name in names]
['my name is bobi',
'my name is eva',
'my name is joe',
'my name is 二狗',
'my name is 李雷',
'my name is hanmeimei']
num_list = list(range(1,16))
[num**2 for num in num_list]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225]
[num**2 for num in num_list if num%2==1]
[1, 9, 25, 49, 81, 121, 169, 225]
[num**2 for num in num_list if (num%2==1 and num<5)]
[1, 9]
Sets (set)
num_list = num_list*2
num_list
[1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15]
set(num_list)
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
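Beyond de-duplication, sets support membership tests and set algebra; a brief sketch (not from the original lesson):
a = {1, 2, 3}
b = {2, 3, 4}
3 in a   # True
a | b    # union: {1, 2, 3, 4}
a & b    # intersection: {2, 3}
a - b    # difference: {1}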
Dictionaries (dict)
Sometimes we need data as key–value pairs.
legs ={'spider:8','pig:4','duck:2'}
legs.{'pig'}
File "<ipython-input-80-9796474270bf>", line 1
legs.{'pig'}
^
SyntaxError: invalid syntax
list(legs.key())
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-81-e76d2b520d0a> in <module>()
----> 1 list(legs.key())
AttributeError: 'set' object has no attribute 'key'
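The cell above actually builds a set of three strings ('spider:8', 'pig:4', 'duck:2'), not a dictionary, which is why both lookups fail; a corrected sketch of what was presumably intended:
legs = {'spider': 8, 'pig': 4, 'duck': 2}
legs['pig']
4
list(legs.keys())
['spider', 'pig', 'duck']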
Dictionary comprehensions
my_list = list(range(10))
my_list
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
{num:num**3 for num in my_list}
{0: 0, 1: 1, 2: 8, 3: 27, 4: 64, 5: 125, 6: 216, 7: 343, 8: 512, 9: 729}
### Advanced sorting
my_list = [5,1,4,2,3]
my_list.sort()  #method: sorts the list in place
my_list
[1, 2, 3, 4, 5]
tmp_list=[5,4,3,1,2]
sorted(tmp_list)  #built-in function: returns a new sorted list without modifying the original
[1, 2, 3, 4, 5]
tmp_list
[5, 4, 3, 1, 2]
sorted(tmp_list,reverse=False)
[1, 2, 3, 4, 5]
strs =['ccc','aaaa','eeeeeee','kdjehd']
sorted(strs)
['aaaa', 'ccc', 'eeeeeee', 'kdjehd']
sorted(strs,key=len)  #sort by the result of applying key to each element
['ccc', 'aaaa', 'kdjehd', 'eeeeeee']
tmp_strs= ['aa','CC','BB','ZZ']
sorted(tmp_strs,key=str.lower)
['aa', 'BB', 'CC', 'ZZ']
Functions
- defined with the def keyword
- parentheses enclose the parameter list
- the body must be indented
def my_sum(a,b):
    return (a+b+1)**(1/3)
my_sum(3,4)
2.0
def fib(n):
    a,b = 0,1
    for i in range(n):
        print(a,end=" ")
        a,b=b,a+b
    print()
fib(15)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
def my_mul(x,y=3):
    return x*y
my_mul(4,5)
20
my_mul(4)
12
Reading and writing files
!ls
2017-11-25 19-16-05屏幕截圖.png 波哥linux創建的 文檔 稀牛學院課1.ipynb
!2017-11-25\ 19-16-05屏幕截圖.png
/bin/sh: 1: 2017-11-25 19-16-05屏幕截圖.png: not found
!pwd
/home/bog/圖片
help(open)
Help on built-in function open in module io:
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
Open file and return a stream. Raise IOError upon failure.
file is either a text or byte string giving the name (and the path
if the file isn't in the current working directory) of the file to
be opened or an integer file descriptor of the file to be
wrapped. (If a file descriptor is given, it is closed when the
returned I/O object is closed, unless closefd is set to False.)
mode is an optional string that specifies the mode in which the file
is opened. It defaults to 'r' which means open for reading in text
mode. Other common values are 'w' for writing (truncating the file if
it already exists), 'x' for creating and writing to a new file, and
'a' for appending (which on some Unix systems, means that all writes
append to the end of the file regardless of the current seek position).
In text mode, if encoding is not specified the encoding used is platform
dependent: locale.getpreferredencoding(False) is called to get the
current locale encoding. (For reading and writing raw bytes use binary
mode and leave encoding unspecified.) The available modes are:
========= ===============================================================
Character Meaning
--------- ---------------------------------------------------------------
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' create a new file and open it for writing
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open a disk file for updating (reading and writing)
'U' universal newline mode (deprecated)
========= ===============================================================
The default mode is 'rt' (open for reading text). For binary random
access, the mode 'w+b' opens and truncates the file to 0 bytes, while
'r+b' opens the file without truncation. The 'x' mode implies 'w' and
raises an `FileExistsError` if the file already exists.
Python distinguishes between files opened in binary and text modes,
even when the underlying operating system doesn't. Files opened in
binary mode (appending 'b' to the mode argument) return contents as
bytes objects without any decoding. In text mode (the default, or when
't' is appended to the mode argument), the contents of the file are
returned as strings, the bytes having been first decoded using a
platform-dependent encoding or using the specified encoding if given.
'U' mode is deprecated and will raise an exception in future versions
of Python. It has no effect in Python 3. Use newline to control
universal newlines mode.
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to select
line buffering (only usable in text mode), and an integer > 1 to indicate
the size of a fixed-size chunk buffer. When no buffering argument is
given, the default buffering policy works as follows:
* Binary files are buffered in fixed-size chunks; the size of the buffer
is chosen using a heuristic trying to determine the underlying device's
"block size" and falling back on `io.DEFAULT_BUFFER_SIZE`.
On many systems, the buffer will typically be 4096 or 8192 bytes long.
* "Interactive" text files (files for which isatty() returns True)
use line buffering. Other text files use the policy described above
for binary files.
encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent, but any encoding supported by Python can be
passed. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding errors are to
be handled---this argument should not be used in binary mode. Pass
'strict' to raise a ValueError exception if there is an encoding error
(the default of None has the same effect), or pass 'ignore' to ignore
errors. (Note that ignoring encoding errors can lead to data loss.)
See the documentation for codecs.register or run 'help(codecs.Codec)'
for a list of the permitted encoding error strings.
newline controls how universal newlines works (it only applies to text
mode). It can be None, '', '\n', '\r', and '\r\n'. It works as
follows:
* On input, if newline is None, universal newlines mode is
enabled. Lines in the input can end in '\n', '\r', or '\r\n', and
these are translated into '\n' before being returned to the
caller. If it is '', universal newline mode is enabled, but line
endings are returned to the caller untranslated. If it has any of
the other legal values, input lines are only terminated by the given
string, and the line ending is returned to the caller untranslated.
* On output, if newline is None, any '\n' characters written are
translated to the system default line separator, os.linesep. If
newline is '' or '\n', no translation takes place. If newline is any
of the other legal values, any '\n' characters written are translated
to the given string.
If closefd is False, the underlying file descriptor will be kept open
when the file is closed. This does not work when a file name is given
and must be True in that case.
A custom opener can be used by passing a callable as *opener*. The
underlying file descriptor for the file object is then obtained by
calling *opener* with (*file*, *flags*). *opener* must return an open
file descriptor (passing os.open as *opener* results in functionality
similar to passing None).
open() returns a file object whose type depends on the mode, and
through which the standard file operations such as reading and writing
are performed. When open() is used to open a file in a text mode ('w',
'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open
a file in a binary mode, the returned class varies: in read binary
mode, it returns a BufferedReader; in write binary and append binary
modes, it returns a BufferedWriter, and in read/write mode, it returns
a BufferedRandom.
It is also possible to use a string or bytearray as a file for both
reading and writing. For strings StringIO can be used like a file
opened in a text mode, and for bytes a BytesIO can be used like a file
opened in a binary mode.
f = open('./無標題文檔','r')
f.read()
"sorted(tmp_list)\ndsjfsjofdsfjsdajfdsa\nfjdsiojfds\n4\nkds'\n"
f.close()
f = open('./無標題文檔','r')
f.readlines()
['sorted(tmp_list)\n',
'dsjfsjofdsfjsdajfdsa\n',
'fjdsiojfds\n',
'4\n',
"kds'\n"]
f.close()
for line in open('./無標題文檔','r'):
    print(len(line.strip()))
16
20
10
1
4
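Only reading is shown above; writing works the same way with mode 'w' (or 'a' to append). A minimal sketch with a made-up file name:
f = open('./test_output.txt', 'w')
f.write('first line\n')
f.write('second line\n')
f.close()
A with block closes the file automatically:
with open('./test_output.txt') as f:
    print(f.read())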
Homework: word frequency
Given a file, count how many times each word appears in the document.
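One possible starting point for the homework (a sketch only, reusing the file from the reading example above):
counts = {}
for line in open('./無標題文檔', 'r'):
    for word in line.strip().split():
        counts[word] = counts.get(word, 0) + 1
counts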
Regular expressions
tem_str = "有誰喜歡郭金明的書,那么韓寒呢"
tem_str.find("郭金明")
4
A regex matches a pattern; for example, find every email-address string in a piece of text.
email_str = "my email is wangbo8805@gmail.com,his email is test@test.net,goodbye"
import re  # regular expressions
#\d matches a digit
#\w matches a letter, digit, or underscore
#\s matches any whitespace: space, tab, newline
#+ matches one or more times (greedy)
#* matches zero or more times
match=re.search(r'[\w.-]+@[\w.-]+',email_str)
if match:
    print(match.group())
wangbo8805@gmail.com
hanxiaoyang.ml@gmail.com (Han Xiaoyang's email address)
emails = re.findall(r'[\w.-]+@[\w.-]+',email_str)
if emails:
    for email in emails:
        print(email)
wangbo8805@gmail.com
test@test.net