Linux vmi2545633.contaboserver.net 6.1.0-32-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.129-1 (2025-03-06) x86_64
Apache/2.4.62 (Debian)
Server IP : 127.0.0.1 & Your IP : 127.0.0.1
Domains : Can't read [ /etc/named.conf ]
User : www-data
Path : /usr/local/lib/python3.11/dist-packages/joblib/
Name                         Size      Permission  Date
---------------------------  --------  ----------  ----------------
__pycache__                  [ DIR ]   drwxr-xr-x  2025-04-08 17:00
externals                    [ DIR ]   drwxr-xr-x  2025-04-08 17:00
test                         [ DIR ]   drwxr-xr-x  2025-04-08 17:00
__init__.py                  5.01 KB   -rw-r--r--  2025-04-08 17:00
_cloudpickle_wrapper.py      417 B     -rw-r--r--  2025-04-08 17:00
_dask.py                     13 KB     -rw-r--r--  2025-04-08 17:00
_memmapping_reducer.py       27.43 KB  -rw-r--r--  2025-04-08 17:00
_multiprocessing_helpers.py  1.88 KB   -rw-r--r--  2025-04-08 17:00
_parallel_backends.py        24.89 KB  -rw-r--r--  2025-04-08 17:00
_store_backends.py           16.29 KB  -rw-r--r--  2025-04-08 17:00
_utils.py                    2.03 KB   -rw-r--r--  2025-04-08 17:00
backports.py                 5.24 KB   -rw-r--r--  2025-04-08 17:00
compressor.py                19.3 KB   -rw-r--r--  2025-04-08 17:00
disk.py                      4.29 KB   -rw-r--r--  2025-04-08 17:00
executor.py                  5.02 KB   -rw-r--r--  2025-04-08 17:00
func_inspect.py              13.87 KB  -rw-r--r--  2025-04-08 17:00
hashing.py                   10.29 KB  -rw-r--r--  2025-04-08 17:00
logger.py                    5.33 KB   -rw-r--r--  2025-04-08 17:00
memory.py                    45.45 KB  -rw-r--r--  2025-04-08 17:00
numpy_pickle.py              26.26 KB  -rw-r--r--  2025-04-08 17:00
numpy_pickle_compat.py       8.35 KB   -rw-r--r--  2025-04-08 17:00
numpy_pickle_utils.py        8.52 KB   -rw-r--r--  2025-04-08 17:00
parallel.py                  82.6 KB   -rw-r--r--  2025-04-08 17:00
pool.py                      14.08 KB  -rw-r--r--  2025-04-08 17:00
testing.py                   3.02 KB   -rw-r--r--  2025-04-08 17:00
"""Joblib is a set of tools to provide **lightweight pipelining in Python**. In particular: 1. transparent disk-caching of functions and lazy re-evaluation (memoize pattern) 2. easy simple parallel computing Joblib is optimized to be **fast** and **robust** on large data in particular and has specific optimizations for `numpy` arrays. It is **BSD-licensed**. ==================== =============================================== **Documentation:** https://joblib.readthedocs.io **Download:** https://pypi.python.org/pypi/joblib#downloads **Source code:** https://github.com/joblib/joblib **Report issues:** https://github.com/joblib/joblib/issues ==================== =============================================== Vision -------- The vision is to provide tools to easily achieve better performance and reproducibility when working with long running jobs. * **Avoid computing the same thing twice**: code is often rerun again and again, for instance when prototyping computational-heavy jobs (as in scientific development), but hand-crafted solutions to alleviate this issue are error-prone and often lead to unreproducible results. * **Persist to disk transparently**: efficiently persisting arbitrary objects containing large data is hard. Using joblib's caching mechanism avoids hand-written persistence and implicitly links the file on disk to the execution context of the original Python object. As a result, joblib's persistence is good for resuming an application status or computational job, eg after a crash. Joblib addresses these problems while **leaving your code and your flow control as unmodified as possible** (no framework, no new paradigms). Main features ------------------ 1) **Transparent and fast disk-caching of output value:** a memoize or make-like functionality for Python functions that works well for arbitrary Python objects, including very large numpy arrays. Separate persistence and flow-execution logic from domain logic or algorithmic code by writing the operations as a set of steps with well-defined inputs and outputs: Python functions. Joblib can save their computation to disk and rerun it only if necessary:: >>> from joblib import Memory >>> cachedir = 'your_cache_dir_goes_here' >>> mem = Memory(cachedir) >>> import numpy as np >>> a = np.vander(np.arange(3)).astype(float) >>> square = mem.cache(np.square) >>> b = square(a) # doctest: +ELLIPSIS ______________________________________________________________________... [Memory] Calling square... square(array([[0., 0., 1.], [1., 1., 1.], [4., 2., 1.]])) _________________________________________________...square - ...s, 0.0min >>> c = square(a) >>> # The above call did not trigger an evaluation 2) **Embarrassingly parallel helper:** to make it easy to write readable parallel code and debug it quickly:: >>> from joblib import Parallel, delayed >>> from math import sqrt >>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10)) [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0] 3) **Fast compressed Persistence**: a replacement for pickle to work efficiently on Python objects containing large data ( *joblib.dump* & *joblib.load* ). .. >>> import shutil ; shutil.rmtree(cachedir) """ # PEP0440 compatible formatted version, see: # https://www.python.org/dev/peps/pep-0440/ # # Generic release markers: # X.Y # X.Y.Z # For bugfix releases # # Admissible pre-release markers: # X.YaN # Alpha release # X.YbN # Beta release # X.YrcN # Release Candidate # X.Y # Final release # # Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer. 
# 'X.Y.dev0' is the canonical version of 'X.Y.dev' # __version__ = '1.4.2' import os from .memory import Memory from .memory import MemorizedResult from .memory import register_store_backend from .memory import expires_after from .logger import PrintTime from .logger import Logger from .hashing import hash from .numpy_pickle import dump from .numpy_pickle import load from .compressor import register_compressor from .parallel import Parallel from .parallel import delayed from .parallel import cpu_count from .parallel import register_parallel_backend from .parallel import parallel_backend from .parallel import parallel_config from .parallel import effective_n_jobs from ._cloudpickle_wrapper import wrap_non_picklable_objects __all__ = ['Memory', 'MemorizedResult', 'PrintTime', 'Logger', 'hash', 'dump', 'load', 'Parallel', 'delayed', 'cpu_count', 'effective_n_jobs', 'register_parallel_backend', 'parallel_backend', 'expires_after', 'register_store_backend', 'register_compressor', 'wrap_non_picklable_objects', 'parallel_config'] # Workaround issue discovered in intel-openmp 2019.5: # https://github.com/ContinuumIO/anaconda-issues/issues/11294 os.environ.setdefault("KMP_INIT_AT_FORK", "FALSE")
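Feature (3) in the docstring above names *joblib.dump* and *joblib.load* but, unlike the first two features, ships without a usage snippet. A minimal sketch of the round trip follows; the file name, sample data, and compression level are illustrative choices, not taken from the file above:

# Sketch of joblib's compressed persistence (feature 3 above).
# Path and data are hypothetical; only dump/load and the documented
# compress parameter come from joblib itself.
import os
import tempfile

import numpy as np
from joblib import dump, load

data = {"weights": np.arange(1_000_000, dtype=np.float64),
        "label": "model-state"}

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "data.joblib")
    # compress=3 trades write speed for file size; compress=0 (the
    # default) stores the arrays uncompressed.
    dump(data, path, compress=3)
    restored = load(path)
    assert restored["label"] == data["label"]
    assert np.array_equal(restored["weights"], data["weights"])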
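The import block also exports parallel_config, which the docstring never demonstrates. A hedged sketch of its typical use, configuring the backend for Parallel calls made inside the context; the backend name and worker count here are illustrative values:

# Sketch of parallel_config (exported above) steering Parallel.
from math import sqrt

from joblib import Parallel, delayed, parallel_config

# Parallel calls inside the context inherit the backend and n_jobs
# set here unless they override them explicitly.
with parallel_config(backend="threading", n_jobs=2):
    results = Parallel()(delayed(sqrt)(i ** 2) for i in range(10))

print(results)  # [0.0, 1.0, ..., 9.0]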