Visual profiler for Python

Overview


vprof

vprof is a Python package providing rich and interactive visualizations for various Python program characteristics such as running time and memory usage. It supports Python 3.4+ and is distributed under the BSD license.

The project is in active development and some of its features might not work as expected.

Screenshots

(animated demo GIF)

Contributing

All contributions are highly encouraged! You can add new features, report or fix bugs, and write docs and tutorials. Feel free to open an issue or send a pull request!

Prerequisites

Dependencies to build vprof from source code:

  • Python 3.4+
  • pip
  • npm >= 3.3.12

npm is required only to build vprof from source.

Dependencies

All Python and npm dependencies are listed in requirements.txt and package.json, respectively.

Installation

vprof can be installed from PyPI:

pip install vprof

To build vprof from source, clone this repository and execute:

python3 setup.py deps_install && python3 setup.py build_ui && python3 setup.py install

To install only vprof's dependencies, run:

python3 setup.py deps_install

Usage

vprof -c <config> <src>

<config> is a combination of supported modes:

  • c - CPU flame graph ⚠️ Not available for Windows #62

Shows CPU flame graph for <src>.

  • p - profiler

Runs built-in Python profiler on <src> and displays results.

  • m - memory graph

Shows objects that are tracked by the CPython GC and remain in memory after code execution. Also shows process memory usage after each line of <src> is executed.

  • h - code heatmap

Displays all executed code of <src> with line run times and execution counts.

<src> can be a Python source file (e.g. testscript.py) or a path to a package (e.g. myproject/test_package).

To run scripts with arguments, use double quotes:

vprof -c cmh "testscript.py --foo --bar"

Modes can be combined:

vprof -c cm testscript.py

vprof can also profile functions. To do this, launch vprof in remote mode:

vprof -r

vprof will open a new tab in the default web browser and wait for stats.

To profile a function, run:

from vprof import runner

def foo(x, y):
    return x + y

# host and port must match the vprof server started with `vprof -r`
runner.run(foo, 'cmhp', args=(1, 2), host='localhost', port=8000)

where cmhp is the profiling mode, and host and port are the hostname and port of the vprof server launched in remote mode. The obtained stats will be rendered in a new tab of the default web browser opened by the vprof -r command.

vprof can save profile stats to a file and render visualizations from a previously saved file.

vprof -c cmh src.py --output-file profile.json

writes profile to file and

vprof --input-file profile.json

renders visualizations from previously saved file.

Check vprof -h for the full list of supported parameters.

To show UI help, press h when visualizations are displayed.

You can also check the examples directory for more profiling examples.

Testing

To run the Python, JavaScript, and end-to-end test suites, execute:

python3 setup.py test_python && python3 setup.py test_javascript && python3 setup.py e2e_test

License

BSD

Comments
  • `Exception happened during processing of request from ('127.0.0.1', 51937)`


    Thanks for making this useful project available!

    I pip installed it (Mac OS 10.11.3, Anaconda Python 3.5.1), and then tried running it. I get the messages saying that it's running the various profiling functionalities (depending on the flags), and then a blank browser tab opens pointing to localhost:8000. The message in the title then appears in the terminal.

    Any ideas? Thanks again

    bug 
    opened by arokem 23
  • Heatmap creator hangs if called with another profilers


    I'm trying to profile http://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html . vprof {ch/hc/mh/hm} plot_inpaint.py finishes within seconds, while vprof cmh ... doesn't finish even within 10 minutes.

    bug 
    opened by soupault 17
  • Ability to choose profiled files


    Description

    I tested your package against Django tests after I saw what it was capable of doing with the code heatmap, which renders great by the way. So I thought I would give it a try on my Django tests, because Selenium is taking a long time and a heatmap would definitely reveal which parts are slow so I can focus on them.

    How to reproduce

    $ vprof -c h "manage.py test --failfast"

    Actual results

    The code heatmap only shows the manage.py file. Would it be possible to add an option to monitor my test files (i.e. tests.py or any other file)?

    Expected results

    I would like to have several files (that I pick beforehand) show up in the profiling result.

    Version and platform

    vprof==0.36.1 MacOS 10.12.5 Python 3.5.2

    feature request 
    opened by SebCorbin 14
  • Some imports in Python 3 break flame graph rendering


    I ran vprof -c cmh -s "evaluate.py -o 7496a0471ba8259926b36028b79b5a0e62ecf03c -d 510 -f 260 -s 25" and no results were presented in any way. If I understand correctly, new tabs should appear in my browser, but that didn't happen.

    Also: I had to pass the hash without single quotation marks, as vprof would put another pair around them. Is this intended?

    Platform: python3.5:

    Python 3.5.2 (default, Jun 28 2016, 08:46:01) 
    [GCC 6.1.1 20160602] on linux
    

    Chromium Version 53.0.2785.116 (64-bit)

    I am running Antergos linux

    bug 
    opened by sims1253 13
  • `python setup.py deps_install` doesn't complete properly


    I'm not sure if it's deps_install that's failing, but whenever I open vprof in remote mode, http://localhost:/ shows a spinning circle that keeps on spinning forever. build_ui and install throw no noticeable errors. Here's the output from deps_install:

    (mvenv)$ python setup.py deps_install
    running deps_install
    Requirement already satisfied (use --upgrade to upgrade): psutil>=3.4.2 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 1))
    Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): mock>=1.0.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 1))
    Requirement already satisfied (use --upgrade to upgrade): pylint>=1.5.4 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): six in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): astroid<1.5.0,>=1.4.5 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): colorama in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): wrapt in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): lazy-object-proxy in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
    npm WARN package.json [email protected] No README data
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/engine.io/node_modules/engine.io-parser requires has-binary@'0.1.6' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/has-binary,
    npm WARN unmet dependency which is version 0.1.7
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client requires component-emitter@'1.1.2' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/component-emitter,
    npm WARN unmet dependency which is version 1.2.0
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-adapter/node_modules/socket.io-parser requires debug@'0.7.4' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/debug,
    npm WARN unmet dependency which is version 2.2.0
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/sshpk requires assert-plus@'^1.0.0' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/assert-plus,
    npm WARN unmet dependency which is version 0.2.0
    

    My self-test outputs are as follows:

    (mvenv)$ python setup.py test
    running test
    testGetRunDispatcher (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjFunction (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjImportedPackage (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjModule (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjPackagePath (base_profile_test.BaseProfileUnittest) ... ok
    testRun (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsFunction (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsModule (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsPackageInNamespace (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsPackagePath (base_profile_test.BaseProfileUnittest) ... ok
    testGetPackageCode (base_profile_test.GetPackageCodeUnittest) ... ok
    testAddCode (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testCalcHeatmap (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testInit (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testAddCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_EmptyEventsList (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_NormalUsage (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_OtherCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_SameLine (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTransformStats (runtime_profile_test.RuntimeProfileUnittest) ... ok
    
    ----------------------------------------------------------------------
    Ran 20 tests in 0.022s
    
    OK
    
    > vprof-frontend@ test /Users/ali/repo/vprof
    > karma start
    
    19 05 2016 16:37:59.894:INFO [framework.browserify]: bundle built
    19 05 2016 16:37:59.907:INFO [karma]: Karma v0.13.22 server started at http://localhost:9876/
    19 05 2016 16:37:59.914:INFO [launcher]: Starting browser PhantomJS
    19 05 2016 16:38:01.173:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#-QYG9HhjEzuR9d5NAAAA with id 80466447
    PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 3 of 3 SUCCESS (0.003 secs / 0.002 secs)
    (mvenv)$ python setup.py e2e_test
    running e2e_test
    testRequest (code_heatmap_e2e.CodeHeatmapFunctionEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapImportedPackageEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapModuleEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapPackageAsPathEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileFunctionEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileImportedPackageEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileModuleEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfilePackageAsPathEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileFunctionEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileImportedPackageEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileModuleEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfilePackageAsPathEndToEndTest) ... ok
    
    ----------------------------------------------------------------------
    Ran 12 tests in 6.114s
    
    OK
    

    I'm sure it's something really straightforward I've missed, because I didn't see any issues like this in other open (or closed) issues.

    opened by alichaudry 9
  • Remove entries < 0.2% of total runtime


    If you have a lot of files (say, thousands), many of them won't account for more than 0.X% of total runtime. However, they will still be rendered in the results list, making the UI take a very long time to load or crash.

    This is a working PoC that removes any entry below 0.2% of total runtime from the results before serving them to the UI.

    Another approach would be to include only the top 99% (or whatever) of time consumers, but this didn't work as well as the fixed-threshold approach.
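    The fixed-threshold filtering described above can be sketched as follows; the entry layout (a `time` field per entry) and the helper name are hypothetical illustrations, not vprof's actual internals:

    ```python
    def filter_small_entries(entries, total_time, threshold=0.002):
        """Drop entries accounting for less than 0.2% of total runtime.

        `entries` is assumed to be a list of dicts with a 'time' field;
        the real vprof stats layout may differ.
        """
        return [e for e in entries if e['time'] / total_time >= threshold]

    entries = [{'name': 'hot', 'time': 9.9}, {'name': 'cold', 'time': 0.01}]
    filtered = filter_small_entries(entries, total_time=10.0)
    # 'cold' is dropped: 0.01 / 10.0 = 0.1%, below the 0.2% threshold
    print([e['name'] for e in filtered])  # ['hot']
    ```

    A fixed threshold like this keeps the output size bounded regardless of how many tiny entries a large codebase produces, which is why it behaved better than a top-N% cutoff.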

    opened by jonashaag 6
  • Allow code heatmap to show time taken per line as well as execution count


    Thanks for a great tool! This visual profiler is what I was looking for in Python.

    I was surprised though that the code heatmap only shows the execution count for the colour highlighting, and for the mouseovers. For me it is far more interesting to know how long each line took, not just how many times it is run. Of course this is also somewhat in the flame graph, but not in the same format.

    It would be great to be able to select what the code heatmap shows, effectively allowing the user to choose between the columns of line_profiler. This could be in the command-line options, but ideally could be in a drop-down menu in the code heatmap screen. I believe matlab's profiler offers something similar.

    Thanks

    feature request 
    opened by mdfirman 6
  • vprof crashes with a stacktrace on scripts with Python 3.6.1 32-bit and Windows


    Description

    vprof crashes with a stacktrace on scripts with Python 3.6.1 32-bit and Windows

    How to reproduce
    • Use Windows 10.
    • Install Python 3.6.1 32 bit. (Also had issues with 64 bit.)
    • Type pip install vprof
    • create a script that says print("Hi") and save it as test.py
    • On the command line type vprof -c p test.py
    Actual results
    C:\>vprof -c p test.py
    Running Profiler...
    Traceback (most recent call last):
      File "c:\program files (x86)\python36-32\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "c:\program files (x86)\python36-32\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Program Files (x86)\Python36-32\Scripts\vprof.exe\__main__.py", line 9, in <module>
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\__main__.py", line 88, in main
        source, config, verbose=True)
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\runner.py", line 78, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 162, in run
        return dispatcher()
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 71, in multiprocessing_wrapper
        process.start()
      File "c:\program files (x86)\python36-32\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    AttributeError: Can't pickle local object 'run_in_another_process.<locals>.multiprocessing_wrapper.<locals>.remote_wrapper'
    
    C:\>Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\program files (x86)\python36-32\lib\multiprocessing\spawn.py", line 99, in spawn_main
        new_handle = reduction.steal_handle(parent_pid, pipe_handle)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 82, in steal_handle
        _winapi.PROCESS_DUP_HANDLE, False, source_pid)
    OSError: [WinError 87] The parameter is incorrect
    
    Expected results

    Not a stack trace.

    Version and platform

    Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32

    bug 
    opened by pvcraven 5
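  For context: on Windows, multiprocessing uses the spawn start method, which pickles the target callable, and locally defined functions (like the remote_wrapper in the traceback above) cannot be pickled. A minimal illustration of the underlying limitation, unrelated to vprof's actual code:

    ```python
    import pickle

    def make_local():
        # A function defined inside another function cannot be pickled,
        # because pickle stores functions by their importable qualified name.
        def local_fn():
            return 1
        return local_fn

    try:
        pickle.dumps(make_local())
        picklable = True
    except (pickle.PicklingError, AttributeError):
        picklable = False

    print(picklable)  # False
    ```

  Under the fork start method used by default on Linux, the child inherits the parent's memory and no pickling happens, which is why the crash is Windows-specific.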
  • SIGPROF does not work on Windows


    Python 3.5.1 (v3.5.1:37a07cee5969, Dec  6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
    
    PS C:\Users\Admin\Downloads\trufont> vprof -c c Lib/trufont
    Running FlameGraphProfiler...
    Traceback (most recent call last):
      File "C:\Python35\Scripts\vprof-script.py", line 9, in <module>
        load_entry_point('vprof==0.34', 'console_scripts', 'vprof')()
      File "c:\python35\lib\site-packages\vprof\__main__.py", line 88, in main
        source, config, verbose=True)
      File "c:\python35\lib\site-packages\vprof\runner.py", line 78, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 169, in run
        prof = run_dispatcher()
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 142, in run_as_package
        with _StatProfiler() as prof:
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 29, in __enter__
        signal.signal(signal.SIGPROF, self.sample)
    AttributeError: module 'signal' has no attribute 'SIGPROF'
    
    bug known issue 
    opened by adrientetar 5
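  For context: the statistical profiler behind the flame graph samples via signal.SIGPROF, a POSIX interval-timer signal that Python's signal module does not expose on Windows. A quick portability check (the fallback suggestion is ours, not vprof's):

    ```python
    import signal

    # SIGPROF is delivered by setitimer(ITIMER_PROF, ...) on POSIX systems;
    # the Windows build of CPython does not define it, hence the AttributeError.
    if hasattr(signal, 'SIGPROF'):
        print("SIGPROF available: sampling via ITIMER_PROF is possible")
    else:
        print("SIGPROF unavailable: a wall-clock sampling thread is a common fallback")
    ```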
  • Don't suppress ImportErrors.


    This is a tool for Python programmers. We know what import errors are, and they give us some idea of what we're doing wrong. Replacing them with a vague error message doesn't help.

    opened by ryneeverett 5
  • Pass back return value from the profiled function


    vprof.runner.run(...) ignores the return value of the profiled function. Would it be possible to pass it back?

    (I would like to submit a pull request, but it's actually not as straightforward as I thought :/)
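    Until this is supported, one hedged workaround is to capture the return value through a mutable container and profile a wrapper instead; the helper below is our own sketch, not vprof API (the runner.run call is shown commented out):

    ```python
    def capture_result(func):
        """Wrap func so its return value is stashed in result['value']."""
        result = {}
        def wrapper(*args, **kwargs):
            result['value'] = func(*args, **kwargs)
        return wrapper, result

    def foo(x, y):
        return x + y

    wrapper, result = capture_result(foo)
    # runner.run(wrapper, 'p', args=(1, 2))  # would profile foo and fill `result`
    wrapper(1, 2)  # plain call shown here for illustration
    print(result['value'])  # 3
    ```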

    feature request 
    opened by eudoxos 3
  • Latest `highlight.js` node module breaks the build


    Description

    Fails to build with the latest highlight.js node module.

    How to reproduce

    Get the latest highlight.js and run python setup.py build_ui.

    Fix

    I looked into the commits over on highlight.js and found this commit stating

    github-gist has been removed in favor of github [Jan Pilzer][]

    If I update highlight.css and change

    @import url('node_modules/highlight.js/styles/github-gist.css');

    to

    @import url('node_modules/highlight.js/styles/github.css');

    everything works.

    opened by JesseBuesking 0
  • How to use on Windows


    On Windows, I did pip3 install vprof, but that didn't create any vprof.bat or vprof.exe anywhere, so when I try something like vprof -c <config> <src> on the command line (as suggested by the readme), I get "'vprof' is not recognized as an internal or external command, operable program or batch file."

    Should I add something specific to my environment variables?

    opened by page200 1
  • Profiling command-line based scripts that change terminal settings results in a termios error


    Description

    Profiling a CLI script results in this error:

    Traceback (most recent call last):
      File "/usr/local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/usr/local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/usr/local/lib/python3.9/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    termios.error: (19, 'Operation not supported by device')
    
    How to reproduce

    run vprof -c ch Test.py inside this folder.

    Actual results

    This happens after a while

    Traceback (most recent call last):
      File "/usr/local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/usr/local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/usr/local/lib/python3.9/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    termios.error: (19, 'Operation not supported by device')
    
    Expected results

    (without profiler)
    Screenshot 2021-11-24 at 10 39 15 AM (side note: ctrl+z to quit Test.py)

    Version and platform

    vprof 0.38
    MacOs 10.14.6
    Screenshot 2021-11-24 at 10 41 48 AM

    opened by lomnom 0
  • object of type 'int' has no len()


    Description

    run command

    vprof -c cmh test.py 
    

    Error message :

    Running MemoryProfiler...
    Building prefix dict from the default dictionary ...
    Loading model from cache /tmp/jieba.cache
    Loading model cost 0.488 seconds.
    Prefix dict has been built successfully.
    /home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py:144: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
      warnings.warn("torch.distributed.reduce_op is deprecated, please use "
    Running FlameGraphProfiler...
    Traceback (most recent call last):
      File "/home/binnz/miniconda3/envs/transformer-tf/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    TypeError: object of type 'int' has no len()
    
    Version and platform

    Linux binnz-MS-7B89 5.4.0-66-generic #74~18.04.2-Ubuntu SMP Fri Feb 5 11:17:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

    vprof version: 0.38 python version: 3.8.8

    opened by binnz 0
  • Fails with "IndexError: list index out of range" when running with -h (heatmap)

    Description

    I'm not really sure where it's coming from, but I'm getting the following error when I run the profiler in heatmap mode (with the -h flag).

    How to reproduce

    Haven't been able to find out exactly what causes it yet. I just ran my script as vprof -c h "/path/to/script.py --some_param" < input_file.txt

    Actual results

    Fails with

    Traceback (most recent call last):
      File "/home/ldorigo/.local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/code_heatmap.py", line 217, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    IndexError: list index out of range   
    
    Expected results

    Makes a line heatmap :)

    Version and platform

    Python 3.9.1 on linux, vprof 0.38

    opened by ldorigo 1
  • [Help] How to test twisted code


    Description

    I have a sample example; when I run it, an error occurs.

    from twisted.internet import reactor, task
    from vprof import runner
    
    
    def ppp():
        print('called')
    
    
    def profile():
        reactor.callLater(3, reactor.stop)
        t = task.LoopingCall(ppp)
        t.start(0.5)
        reactor.run()
    
    
    if __name__ == '__main__':
        runner.run(profile, 'cmhp')
    
    
    Traceback (most recent call last):
      File "/home/kevin/workspaces/develop/python/crawlerstack/crawlerstack_proxypool/demo.py", line 19, in <module>
        runner.run(profile, 'cmhp')
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/runner.py", line 91, in run
        run_stats = run_profilers((func, args, kwargs), options)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/flame_graph.py", line 172, in profile_function
        result = self._run_object(*self._run_args, **self._run_kwargs)
      File "/home/kevin/workspaces/develop/python/crawlerstack/crawlerstack_proxypool/demo.py", line 15, in profile
        reactor.run()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 1282, in run
        self.startRunning(installSignalHandlers=installSignalHandlers)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 1262, in startRunning
        ReactorBase.startRunning(self)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 765, in startRunning
        raise error.ReactorNotRestartable()
    twisted.internet.error.ReactorNotRestartable
    
    Version and platform
    • vprof 0.38
    • Linux 5.4.70-amd64-desktop #1 SMP Wed Oct 14 15:24:23 CST 2020 x86_64 GNU/Linux
    • Python 3.7.3
    • twisted 20.3.0
    question 
    opened by whg517 0
Releases(v0.38)
  • v0.38(Feb 29, 2020)

  • v0.37.6(Jun 24, 2018)

  • v0.37.5(Jun 24, 2018)

  • v0.37.4(Dec 10, 2017)

  • v0.37.3(Oct 31, 2017)

  • v0.37.2(Oct 12, 2017)

  • v0.37.1(Sep 26, 2017)

  • v0.37(Jul 20, 2017)

    vprof 0.37 has been released.

    New features

    • Now you can see run time distribution for modules under "Inspected modules".
    • Also you can get info about current tab by pressing the "?" button in the top right corner.
    • Bug fixes and performance improvements.
    Source code(tar.gz)
    Source code(zip)
  • v0.36.1(May 1, 2017)

  • v0.36(Apr 9, 2017)

    vprof 0.36 has been released.

    New features

    Code heatmap shows run times for individual lines

    Now you can see run time distribution over executed lines on code heatmap tab. The feature is experimental and will be improved in future versions.


    Source code(tar.gz)
    Source code(zip)
  • v0.35(Jan 12, 2017)

    vprof 0.35 is ready!

    New features

    Consistent flame graph coloring

    Now every function in the flame graph has its own color, allowing you to explore function call patterns.

    Source code(tar.gz)
    Source code(zip)
  • v0.34(Nov 17, 2016)

    vprof 0.34 has been released!

    Thanks to all contributors!

    New features

    Render visualizations from file

    vprof can now save profile stats to a file and render visualizations from a previously saved file.

    vprof -c cmh src.py --output-file profile.json
    

    writes profile to file and

    vprof --input-file profile.json
    

    renders visualizations from previously saved file.

    Please note that stats rendering is supported only within the same version (e.g. vprof 0.35 won't render stats from 0.34).

    Flame graph uses statistical profiler

    Flame graph now uses statistical profiler instead of cProfile.

    "Profiler" tab

    New "Profiler" tab shows how long and how long often various parts of the programs are executed.


    Other changes

    The -s flag has been removed from the CLI.

    Please check `vprof -h` or README.md for more info.

    Source code(tar.gz)
    Source code(zip)
  • v0.33(Sep 21, 2016)

    Hooray, vprof 0.33 has been released!

    New features

    "Objects left in memory" table

    Shows objects that are tracked by CPython garbage collector and left in memory after code execution.


    Improvements

    • Faster memory stats rendering.
    • More accurate memory usage measurement.
    • All data between client and server is gzipped now.
    • Minor UI adjustments.
    Source code(tar.gz)
    Source code(zip)
  • v0.32(Jun 30, 2016)

  • v0.31(Jun 12, 2016)

  • v0.3(May 19, 2016)

    • Add support for profiling single functions.
    • Add support for profiling Python packages.
    • Improve UI performance.
    • Bug fixes and other performance improvements.
  • v0.22(Mar 2, 2016)

  • v0.21(Feb 22, 2016)

  • v0.2(Feb 10, 2016)

Owner
Nick Volynets