===================================================
Nitime: timeseries analysis for neuroscience data
===================================================

Nitime contains a core of numerical algorithms for time-series analysis, both in
the time and spectral domains, a set of container objects to represent
time-series, and auxiliary objects that expose a high-level interface to the
numerical machinery and make common analysis tasks easy to express with compact
and semantically clear code.

Website
=======

Current information can always be found at the NIPY website::

    http://nipy.org/nitime

Mailing Lists
=============

Please see the developer's list here::

    http://mail.scipy.org/mailman/listinfo/nipy-devel

Code
====

You can find our sources and single-click downloads:

* `Main repository`_ on Github.
* Documentation_ for all releases and the current development tree.
* Download the `current trunk`_ as a tar/zip file.
* Downloads of all `available releases`_.

.. _main repository: http://github.com/nipy/nitime
.. _Documentation: http://nipy.org/nitime
.. _current trunk: http://github.com/nipy/nitime/archives/master
.. _available releases: http://github.com/nipy/nitime/downloads

License information
===================

Nitime is licensed under the terms of the new BSD license. See the file
"LICENSE" for information on the history of this software, terms & conditions
for usage, and a DISCLAIMER OF ALL WARRANTIES.

All trademarks referenced herein are property of their respective holders.

Copyright (c) 2006-2011, NIPY Developers. All rights reserved.
Timeseries analysis for neuroscience data
Overview
Comments
-
Missing plots in granger_fmri.html
There might be something not right about the last two figures here:
http://nipy.org/nitime/examples/granger_fmri.html
Those figures are missing the body of the graph; the plot area is all white.
I believe the relevant source file is doc/examples/granger_fmri.py.
-
Fails to estimate dpss_windows for long signals
I have a time series with 166800 samples (raw MEG data), and
alg.dpss_windows(166800, 4, 8)
fails. In Matlab, however, the equivalent computation works. Any idea how to fix this?
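A possible workaround (an assumption on my part: later nitime versions add an interp_from argument to dpss_windows for exactly this situation) is to compute the tapers on a much shorter grid and interpolate them up to the full length:

import nitime.algorithms as alg

N = 166800
# Compute the 8 tapers on a 4096-point grid, then interpolate up to N samples;
# this trades exact eigenvalues for tractable memory and runtime.
dpss, eigvals = alg.dpss_windows(N, 4, 8, interp_from=4096)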
-
Latest release breaking Python 2.7, 3.4 (SyntaxError)
In nipype, tests are breaking in Python 2.7 and 3.4 due to the `@` operator:

./../../virtualenv/python2.7.15/lib/python2.7/site-packages/py/_path/local.py:668: in pyimport
    __import__(modname)
nipype/interfaces/nitime/__init__.py:5: in <module>
    from .analysis import (CoherenceAnalyzerInputSpec, CoherenceAnalyzerOutputSpec,
nipype/interfaces/nitime/analysis.py:28: in <module>
    package_check('nitime')
nipype/utils/misc.py:180: in package_check
    mod = __import__(pkg_name)
../../../virtualenv/python2.7.15/lib/python2.7/site-packages/nitime/__init__.py:26: in <module>
    from . import algorithms
../../../virtualenv/python2.7.15/lib/python2.7/site-packages/nitime/algorithms/__init__.py:62: in <module>
    from nitime.algorithms.event_related import *
E   File "/home/travis/virtualenv/python2.7.15/lib/python2.7/site-packages/nitime/algorithms/event_related.py", line 60
E       h = np.array(linalg.pinv(X.T @ X) @ X.T @ y.T)
E                                   ^
E   SyntaxError: invalid syntax
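The `@` matrix-multiplication operator is only valid syntax on Python 3.5 and later. A minimal sketch of an equivalent expression for older interpreters, with illustrative stand-in shapes for X and y (the real ones come from event_related.py):

import numpy as np
from scipy import linalg

# Stand-ins for the design matrix and data used in event_related.py.
X = np.random.randn(100, 3)
y = np.random.randn(4, 100)

# Python >= 3.5 only:  h = np.array(linalg.pinv(X.T @ X) @ X.T @ y.T)
h = np.array(np.dot(np.dot(linalg.pinv(np.dot(X.T, X)), X.T), y.T))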
-
pip install problem with numpy
This error happens even with numpy installed. It also happens when building on Read the Docs for projects that have nitime as a requirement and autodoc enabled.
Collecting numpy (from nitime)
  Using cached numpy-1.10.4-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Collecting nitime
  Downloading nitime-0.6.tar.gz (10.0MB)
    100% |████████████████████████████████| 10.0MB 55kB/s
  Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 20, in <module>
      File "/private/var/folders/hw/7bn8tjn96vd58k7sg1ptt82c0000gn/T/pip-build-l6NHiN/nitime/setup.py", line 17, in <module>
        exec(f.read())
      File "<string>", line 2, in <module>
      File "nitime/__init__.py", line 26, in <module>
        from . import algorithms
      File "nitime/algorithms/__init__.py", line 55, in <module>
        from nitime.algorithms.spectral import *
      File "nitime/algorithms/spectral.py", line 10, in <module>
        import numpy as np
    ImportError: No module named numpy
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/hw/7bn8tjn96vd58k7sg1ptt82c0000gn/T/pip-build-l6NHiN/nitime
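The traceback shows that setup.py execs code which imports nitime (and therefore numpy) just to read metadata, before numpy exists in the build environment. A common fix sketch, assuming the version string lives in a file such as nitime/version.py (hypothetical layout), is to parse it without importing the package:

# In setup.py: read the version with a regex instead of exec()/import, so
# `python setup.py egg_info` succeeds before numpy is installed.
import re

with open('nitime/version.py') as f:
    version = re.search(r"^version\s*=\s*['\"]([^'\"]+)['\"]",
                        f.read(), re.M).group(1)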
-
Lazy imports
Here's a set of patches that make nitime imports faster and cleaner by deferring the matplotlib and scipy imports (in a few places) until they are actually needed.
Mostly, I want this so that I can
import nitime.timeseries
without pulling in matplotlib and scipy. Without this PR, wall time for imports is 1.2 - 1.9 seconds:
16:[email protected](master)$ time python -c "import nitime"

real    0m1.879s
user    0m0.870s
sys     0m0.415s
With this PR:
16:[email protected](lazy-imports)$ time python -c "import nitime"

real    0m0.425s
user    0m0.242s
sys     0m0.111s
Which is pretty damn good, considering that on this system:
16:[email protected](lazy-imports)$ time python -c "import numpy"

real    0m0.385s
user    0m0.206s
sys     0m0.108s
(In particular, there's about a 300 ms advantage to lazy-loading numpy.testing.nosetools!)
Here are some import analyses - using this lazy loading saves us from importing ~500 modules up front.
In [2]: import sys; snn = set([k for k in sys.modules]); len(snn)
Out[2]: 461   # set of modules - no nitime

In [3]: import nitime; sn = set([k for k in sys.modules]); len(sn)
Out[3]: 629   # set of modules with nitime (this is 1102 without lazy loading)

In [4]: nitime.test(); snt = set([k for k in sys.modules]); len(snt)
Out[4]: 1540  # set of modules after nitime.test() (1542 without lazy loading)
The functionality of lazyimports.LazyImport is generic enough to let the lazily imported module act as the real module in almost every way (tab completion, introspection for docstrings and sources); only reloading is not supported.
For skeptics: add a line such as
bogus.parameter : True
to the end of your ~/.matplotlib/matplotlibrc, which will cause a "Bad key"
user warning from matplotlib on import. Then::

In [1]: import sys

In [2]: import matplotlib.mlab as mlab
Bad key "bogus.parameter" on line 374 in /home/pi/.matplotlib/matplotlibrc.
You probably need to get an updated matplotlibrc file from
http://matplotlib.sf.net/_static/matplotlibrc or from the matplotlib source distribution

In [3]: mlab
Out[3]: <module 'matplotlib.mlab' from '.../site-packages/matplotlib/mlab.pyc'>

In [4]: mlab.
Display all 107 possibilities? (y or n) n

In [6]: [sys.modules.pop(k) for k in sys.modules.keys() if 'matplotlib' in k];

In [7]: from nitime.lazyimports import mlab

In [8]: mlab
Bad key "bogus.parameter" on line 374 in /home/pi/.matplotlib/matplotlibrc.
You probably need to get an updated matplotlibrc file from
http://matplotlib.sf.net/_static/matplotlibrc or from the matplotlib source distribution
Out[8]: <module 'matplotlib.mlab' from '.../site-packages/matplotlib/mlab.pyc'>

In [9]: mlab.
Display all 107 possibilities? (y or n) n
In particular, note that for the lazy case, the actual import of matplotlib did not happen until
In [8]
, which imported the module and called its repr.

As a side note (on making
reload()
work): the following code (a bit more convoluted than what's in this PR) gets closer to being able to reload, but I haven't been able to figure out what machinery is missing to make it fully work. Perhaps @fperez has an idea, but it's not a big deal.

import nitime.descriptors as desc
from types import ModuleType as module

class LazyImport(module):
    def __init__(self, modname):
        # module.__init__(self, modname, "foo")
        self.__lazyname__ = modname
        self.__name__ = modname

    @desc.auto_attr  # one-time property
    def __lazyimported__(self):
        name = module.__getattribute__(self, '__lazyname__')
        return __import__(name, fromlist=name.split('.'))

    def __getattribute__(self, x):
        return module.__getattribute__(self, '__lazyimported__').__getattribute__(x)

    def __repr__(self):
        return module.__getattribute__(self, '__lazyimported__').__repr__()
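For reference, later Python versions (3.5+) ship a similar deferred-import mechanism in the standard library. A minimal sketch using importlib.util.LazyLoader, which is not part of this PR and likewise does not support reload():

import importlib.util
import sys

def lazy_import(name):
    """Return a module whose real import is deferred to first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # registers the lazy module; no real import yet
    return module

mlab = lazy_import("matplotlib.mlab")  # matplotlib is not actually imported here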
-
Memory error of GrangerAnalyzer
Dear all, when I run a script like this:
>>> sampling_rate = 1000
>>> freq_idx_G
Out[7]: array([40, 41])
>>> G.frequencies.shape[0]
Out[8]: 513
>>> g1 = np.mean(G.causality_xy[:, :, freq_idx_G], -1)
I get the following memory error (freq_idx_G is the array shown above):
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-6-b3dd332ebe13> in <module>()
----> 1 g1 = np.mean(G.causality_xy[:, :, freq_idx_G], -1)

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/descriptors.pyc in __get__(self, obj, type)
    138         # Errors in the following line are errors in setting a
    139         # OneTimeProperty
--> 140         val = self.getter(obj)
    141
    142         setattr(obj, self.name, val)

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/analysis/granger.pyc in causality_xy(self)
    202     @desc.setattr_on_read
    203     def causality_xy(self):
--> 204         return self._dict2arr('gc_xy')
    205
    206     @desc.setattr_on_read

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/analysis/granger.pyc in _dict2arr(self, key)
    191         arr = np.empty((self._n_process,
    192                         self._n_process,
--> 193                         self.frequencies.shape[0]))
    194
    195         arr.fill(np.nan)

MemoryError:
Can anyone give me some tips? Thanks!
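For scale: the failing line in _dict2arr allocates an n_process x n_process x n_frequencies array of float64, so memory grows quadratically with the number of channels. A quick back-of-the-envelope sketch (the channel count here is a made-up example):

import numpy as np

n_process = 300     # hypothetical number of channels
n_freq = 513        # from G.frequencies.shape[0] above
bytes_needed = n_process * n_process * n_freq * np.dtype(np.float64).itemsize
print(bytes_needed / 2**30, "GiB")   # ~0.34 GiB for each causality array

If that estimate approaches available RAM (several such arrays may be held at once), reducing the frequency resolution or analyzing subsets of channels is a plausible workaround.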
-
sphinx docs won't build (related to lazyimports?)
Running Sphinx v1.1.2
/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/matplotlib/__init__.py:908: UserWarning: This call to matplotlib.use() has no effect
because the backend has already been chosen; matplotlib.use() must be called before pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.
  if warn: warnings.warn(_use_error_msg)
WARNING: extension 'ipython_console_highlighting' has no setup() function; is it really a Sphinx extension module?
loading pickled environment... not yet created
building [html]: targets for 71 source files that are out of date
updating environment: 71 added, 0 changed, 0 removed
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Note
  warn("Unknown section %s" % key)
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Warning
  warn("Unknown section %s" % key)
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Example
  warn("Unknown section %s" % key)
reading sources... [ 29%] api/generated/nitime.lazyimports

Exception occurred:
  File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/sphinx/environment.py", line 828, in read_doc
    pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
PicklingError: Can't pickle <type 'module'>: attribute lookup __builtin__.module failed
The full traceback has been saved in /var/folders/sf/3b6q6p1d7518rpb4882pzsxw0000gn/T/sphinx-err-9UpBlB.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at http://groups.google.com/group/sphinx-dev/, or report them in the tracker at http://bitbucket.org/birkenfeld/sphinx/issues/. Thanks!
make: *** [htmlonly] Error 1
-
Reorganization
This branch contains a major reorganization of algorithms.py into a sub-module of the library. The main idea is to change the layout of the library, making it slightly more developer-friendly. I am asking for a review of this, in the hope of getting comments on the general structure. The idea is to adopt a similar structure for timeseries.py, utils.py, analysis.py and viz.py.
It also brings test coverage to 100% for almost all of the algorithms sub-module. Some of it is just smoke testing, but I have added quite a few substantive tests for spectral and coherence, as well as for autoregressive.
The one bit that is still not entirely covered by the tests is algorithms.wavelet. I am not so sure how to use these functions. Maybe someone with a better idea (Kilian?) can take a look and add tests for this sub-module?
-
Fix according to changed Sphinx API
Hi,
The build fails due to a change in the signature of Sphinx's `add_directive` function. Log below:

$ sphinx-build doc html-no-exec
Running Sphinx v3.2.0
/home/nilesh/ups/nitime/doc/conf.py:34: MatplotlibDeprecationWarning:
The mpl_toolkits.axes_grid module was deprecated in Matplotlib 2.1 and will be removed two minor releases later.
Use mpl_toolkits.axes_grid1 and mpl_toolkits.axisartist, which provide the same functionality instead.
  __import__(package, fromlist=parts)
WARNING: while setting up extension ipython_console_highlighting: extension 'ipython_console_highlighting' has no setup() function; is it really a Sphinx extension module?

Exception occurred:
  File "/home/nilesh/ups/nitime/doc/sphinxext/only_directives.py", line 40, in setup
    app.add_directive('htmlonly', html_only_directive, True, (0, 0, 0))
TypeError: add_directive() takes from 3 to 4 positional arguments but 5 were given

The full traceback has been saved in /tmp/sphinx-err-z9obztzm.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
This is an attempt to fix that, along with adding a Sphinx build to the Travis tests, to ensure the docs keep building with future commits.
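For context, the kind of change required looks roughly like the following sketch (class name hypothetical): modern Sphinx expects a docutils Directive class and a two-argument add_directive() call instead of the old function-plus-options signature.

from docutils import nodes
from docutils.parsers.rst import Directive

class HtmlOnlyDirective(Directive):
    """Render the directive body only in HTML builds (sketch)."""
    has_content = True

    def run(self):
        node = nodes.container()
        # Parse the directive body as reST into the container node.
        self.state.nested_parse(self.content, self.content_offset, node)
        return [node]

def setup(app):
    # Old API: app.add_directive('htmlonly', html_only_directive, True, (0, 0, 0))
    app.add_directive('htmlonly', HtmlOnlyDirective)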
-
Need description of the document 'fmri_timeseries.csv'
Under nitime/data/, there is an fmri_timeseries.csv file with 31 different areas. Can you give me more information about this file? For example, how was the data generated, or where did it come from?
Thank you!
-
Timearray math
With this PR, adding and subtracting values which aren't TimeArrays first converts them and gives them the unit of the TimeArray. For example:
In [1]: import nitime
In [2]: nitime.TimeArray(1) + 1 Out[2]: 2.0 s
In [3]: nitime.TimeArray(1, time_unit='ms') + 1 Out[3]: 2.0 ms
In [4]: nitime.TimeArray(1, time_unit='ms') + 1 + nitime.TimeArray(1) Out[4]: 1002.0 ms
In [5]: a = nitime.TimeArray(1)
In [6]: a Out[6]: 1.0 s
In [7]: a.convert_unit('ms')
In [8]: a Out[8]: 1000.0 ms
In [9]: a+1 Out[9]: 1001.0 ms
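A minimal sketch of the coercion rule this PR implements (illustrative only, not nitime's actual code): a bare number inherits the TimeArray's own unit before the arithmetic happens.

UNIT_IN_SECONDS = {'s': 1.0, 'ms': 1e-3}

def add(value_a, unit_a, value_b, unit_b=None):
    """Add two time values; a bare number (unit_b=None) inherits unit_a."""
    if unit_b is None:        # the coercion rule introduced by this PR
        unit_b = unit_a
    seconds = value_a * UNIT_IN_SECONDS[unit_a] + value_b * UNIT_IN_SECONDS[unit_b]
    return seconds / UNIT_IN_SECONDS[unit_a]   # result keeps unit_a

add(1, 'ms', 1)                     # -> 2.0 (ms), as in In [3]
add(add(1, 'ms', 1), 'ms', 1, 's')  # -> 1002.0 (ms), as in In [4]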
-
`test_FilterAnalyzer` fails with scipy 1.8.0
Hi,
In maintaining the NixOS package for `nitime`, we noticed that the test `test_FilterAnalyzer` fails once we bump `scipy` to 1.8.0:

_____________________________ test_FilterAnalyzer ______________________________

    def test_FilterAnalyzer():
        """Testing the FilterAnalyzer """
        t = np.arange(np.pi / 100, 10 * np.pi, np.pi / 100)
        fast = np.sin(50 * t) + 10
        slow = np.sin(10 * t) - 20

        fast_mean = np.mean(fast)
        slow_mean = np.mean(slow)

        fast_ts = ts.TimeSeries(data=fast, sampling_rate=np.pi)
        slow_ts = ts.TimeSeries(data=slow, sampling_rate=np.pi)

        # Make sure that the DC is preserved
        f_slow = nta.FilterAnalyzer(slow_ts, ub=0.6)
        f_fast = nta.FilterAnalyzer(fast_ts, lb=0.6)

        npt.assert_almost_equal(f_slow.filtered_fourier.data.mean(),
                                slow_mean, decimal=2)
        npt.assert_almost_equal(f_slow.filtered_boxcar.data.mean(),
                                slow_mean, decimal=2)
        npt.assert_almost_equal(f_slow.fir.data.mean(), slow_mean)
        npt.assert_almost_equal(f_slow.iir.data.mean(), slow_mean)

        npt.assert_almost_equal(f_fast.filtered_fourier.data.mean(), 10)
        npt.assert_almost_equal(f_fast.filtered_boxcar.data.mean(), 10, decimal=2)
        npt.assert_almost_equal(f_fast.fir.data.mean(), 10)
        npt.assert_almost_equal(f_fast.iir.data.mean(), 10)

        # Check that things work with a two-channel time-series:
        T2 = ts.TimeSeries(np.vstack([fast, slow]), sampling_rate=np.pi)
        f_both = nta.FilterAnalyzer(T2, ub=1.0, lb=0.1)

        # These are rather basic tests:
        npt.assert_equal(f_both.fir.shape, T2.shape)
>       npt.assert_equal(f_both.iir.shape, T2.shape)
Full build log available at https://hydra.nixos.org/log/12f43cyblp08zbjc5psd8ayxxmq3if72-python3.9-nitime-0.9.drv where all the python dependency versions can be seen. This is on an x86_64 linux system.
-
negative values in confidence interval of multi-taper coherence
First of all, I still need to read the references more carefully, so I might be wrong.
In the multitaper coherence estimation tutorial, the confidence intervals are computed (`t975_limit` and `t025_limit`), but they are never printed or visualized later. It turns out that `t025_limit` contains many negative values, even though coherence is constrained to lie within [0, 1]. Is anything going wrong here?
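One common remedy (an assumption on my part, not something the tutorial currently does) is to build the interval on the variance-stabilized arctanh scale and map it back, which keeps both bounds inside [0, 1]:

import numpy as np

def coherence_ci(coh, se, z=1.96):
    """Approximate CI for a coherence estimate via the arctanh (Fisher z) transform.

    `se` is the standard error on the transformed scale (assumed given).
    """
    zt = np.arctanh(np.sqrt(coh))                    # transform the coherency magnitude
    lo = np.tanh(np.maximum(zt - z * se, 0.0)) ** 2  # clamp at 0 before mapping back
    hi = np.tanh(zt + z * se) ** 2
    return lo, hi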
-
tsa.periodogram() returns frequencies of all 0s when Fs=1.
Hi,
I noticed that the following code returns `freqs` that is all zeros:

freqs, d_psd = tsa.periodogram(ar_seq, Fs=1., normalize=False)

I believe it is this line (in `algorithms/spectral.py`) that is causing the issue:

freqs = np.linspace(0, Fs // 2, Fn)

Should it be `Fs / 2` instead?

Version: nitime 0.9, installed via conda.
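A quick demonstration of the floor-division pitfall the report describes:

import numpy as np

Fs, Fn = 1.0, 5
print(np.linspace(0, Fs // 2, Fn))  # [0. 0. 0. 0. 0.]  -- Fs // 2 floors to 0.0
print(np.linspace(0, Fs / 2, Fn))   # [0.    0.125 0.25  0.375 0.5  ]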
-
Will it work for multivariate time series prediction, both regression and classification?
Great code, thanks! Could you clarify: will it work for multivariate time series prediction, both regression and classification, (1) where all values are continuous, or (2) where the values are a mixture of continuous and categorical? For example, two dimensions have continuous values and three dimensions are categorical, as in the table below:
   color   weight  gender  height  age
1  black   56      m       160     34
2  white   77      f       170     54
3  yellow  87      m       167     43
4  white   55      m       198     72
5  white   88      f       176     32
-
nitime not installing in Jupyter
Hello,
I installed nitime system-wide using the command window, but afterwards I was made aware that it has to be installed directly in Jupyter, because otherwise it doesn't work there. However, when I try to install it in Jupyter (using "! pip install nitime"), the kernel just remains busy and nothing happens (I gave it 3 hours). The weird thing is that I tried the same command to install another random package ("geocoder") and it worked immediately.
Does anyone know why nitime won't install?
Thanks in advance!
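One common cause (an assumption, since the environment isn't shown) is that the notebook's "!pip" resolves to a different interpreter than the kernel itself. A sketch of the usual fix, run in a notebook cell:

# Run in a Jupyter cell: install into the interpreter the kernel is actually
# running, rather than whichever pip is first on the shell PATH.
import sys
!{sys.executable} -m pip install nitime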
-
feature request: multiple `p` values for `detect_lines`
I'd like to perform harmonic analysis with two different p-values on the same signal, with all other parameters the same. Calling
utils.detect_lines
twice seems hugely wasteful: the FFT has to be done twice, etc. Is there a workaround where I can save partial results? How easy would it be to add a method that accepts multiple p-values? Thanks!
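detect_lines has no two-stage API today, but the caching idea can be sketched outside nitime (the helper below is hypothetical, not part of the library): do the expensive taper-and-FFT step once, then rerun only the cheap per-p thresholding.

import numpy as np
from scipy.signal.windows import dpss

def tapered_spectra(signal, NW=4, Kmax=8):
    """Compute the multitaper spectra once so they can be reused."""
    tapers = dpss(len(signal), NW, Kmax)            # (Kmax, N) Slepian tapers
    return np.fft.rfft(tapers * signal, axis=-1)    # one FFT per taper

signal = np.random.randn(4096)       # stand-in for the real data
spectra = tapered_spectra(signal)    # FFTs computed exactly once
for p in (0.01, 0.05):
    # Threshold the harmonic F-test statistic at each p, reusing `spectra`;
    # the thresholding step is cheap compared to the FFTs above.
    ...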
Releases (rel/0.9)
-
rel/0.9 (Dec 19, 2020)