Visual profiler for Python

Overview

vprof

vprof is a Python package providing rich and interactive visualizations for various Python program characteristics, such as running time and memory usage. It supports Python 3.4+ and is distributed under the BSD license.

The project is in active development and some of its features might not work as expected.

Screenshots

(animated GIF demonstrating vprof visualizations)

Contributing

All contributions are highly encouraged! You can add new features, report and fix existing bugs, and write docs and tutorials. Feel free to open an issue or send a pull request!

Prerequisites

Dependencies to build vprof from source code:

  • Python 3.4+
  • pip
  • npm >= 3.3.12

npm is required only to build vprof from source.

Dependencies

All Python and npm module dependencies are listed in package.json and requirements.txt.

Installation

vprof can be installed from PyPI:

pip install vprof

To build vprof from source, clone this repository and execute:

python3 setup.py deps_install && python3 setup.py build_ui && python3 setup.py install

To install just the vprof dependencies, run:

python3 setup.py deps_install

Usage

vprof -c <config> <src>

<config> is a combination of supported modes:

  • c - CPU flame graph ⚠️ Not available on Windows (#62)

Shows CPU flame graph for <src>.

  • p - profiler

Runs built-in Python profiler on <src> and displays results.

  • m - memory graph

Shows objects that are tracked by the CPython GC and remain in memory after code execution. Also shows process memory usage after the execution of each line of <src>.

  • h - code heatmap

Displays all executed code of <src> with line run times and execution counts.

<src> can be a Python source file (e.g. testscript.py) or a path to a package (e.g. myproject/test_package).
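As background for the m mode, here is a minimal sketch (illustrative only, not vprof's actual implementation) of how CPython's gc module can enumerate objects the collector is tracking:

```python
import gc

class Leaky:
    """A GC-tracked container object that stays referenced after a run."""

# Simulate objects left in memory after code execution.
leftovers = [Leaky() for _ in range(3)]

# Instances of user-defined classes are tracked by the cyclic garbage
# collector; gc.get_objects() returns everything it currently tracks.
tracked = [obj for obj in gc.get_objects() if isinstance(obj, Leaky)]
print(len(tracked))  # → 3
```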

To run scripts with arguments, use double quotes:

vprof -c cmh "testscript.py --foo --bar"

Modes can be combined:

vprof -c cm testscript.py

vprof can also profile functions. To do this, launch vprof in remote mode:

vprof -r

vprof will open a new tab in the default web browser and wait for stats.

To profile a function, run:

from vprof import runner

def foo(arg1, arg2):
    ...

runner.run(foo, 'cmhp', args=(arg1, arg2), host='localhost', port=8000)

where cmhp is the profiling mode, and host and port are the hostname and port of the vprof server launched in remote mode. The obtained stats will be rendered in a new tab of the default web browser opened by the vprof -r command.

vprof can save profile stats to a file and render visualizations from a previously saved file.

vprof -c cmh src.py --output-file profile.json

writes the profile to a file, and

vprof --input-file profile.json

renders visualizations from the previously saved file.

Check vprof -h for the full list of supported parameters.

To show UI help, press h when visualizations are displayed.

You can also check the examples directory for more profiling examples.

Testing

python3 setup.py test_python && python3 setup.py test_javascript && python3 setup.py e2e_test

License

BSD

Comments
  • `Exception happened during processing of request from ('127.0.0.1', 51937)`

    Thanks for making this useful project available!

I pip-installed it (Mac OS 10.11.3, Anaconda Python 3.5.1) and then tried running it. I get the messages saying that it's running the various profiling functionalities (depending on the flags), and then a blank browser tab opens pointing to localhost:8000. The message in the title then appears in the terminal.

    Any ideas? Thanks again

    bug 
    opened by arokem 23
  • Heatmap creator hangs if called with other profilers

I'm trying to profile http://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html . vprof {ch/hc/mh/hm} plot_inpaint.py finishes in seconds, while vprof cmh ... doesn't finish even within 10 minutes.

    bug 
    opened by soupault 17
  • Ability to choose profiled files

    Description

I tested your package against Django tests after I saw what you were capable of doing with the code heatmap, which renders great by the way. So I thought I would give it a try on my Django tests, because Selenium is taking a long time, and a heatmap would definitely reveal which parts are slow so I can focus on them.

    How to reproduce

    $ vprof -c h "manage.py test --failfast"

    Actual results

The code heatmap only shows the manage.py file. Would it be possible to add an option to monitor my test files (i.e. tests.py or any other file)?

    Expected results

    I would like to have several files show up in the profiling result (that I pick beforehand)

    Version and platform

    vprof==0.36.1 MacOS 10.12.5 Python 3.5.2

    feature request 
    opened by SebCorbin 14
  • Some imports in Python 3 break flame graph rendering

I ran vprof -c cmh -s "evaluate.py -o 7496a0471ba8259926b36028b79b5a0e62ecf03c -d 510 -f 260 -s 25" and no results were presented in any way. If I understand correctly, new tabs should appear in my browser, but that didn't happen.

Also: I had to pass the hash without single quotation marks, as vprof would put another pair around them. Is this intended?

    Platform: python3.5:

    Python 3.5.2 (default, Jun 28 2016, 08:46:01) 
    [GCC 6.1.1 20160602] on linux
    

    Chromium Version 53.0.2785.116 (64-bit)

    I am running Antergos linux

    bug 
    opened by sims1253 13
  • `python setup.py deps_install` doesn't complete properly

    I'm not sure if it's deps_install that's failing, but whenever I open vprof in remote mode, http://localhost:/ shows a spinning circle that keeps on spinning forever. build_ui and install throw no noticeable errors. Here's the output from deps_install:

    (mvenv)$ python setup.py deps_install
    running deps_install
    Requirement already satisfied (use --upgrade to upgrade): psutil>=3.4.2 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 1))
    Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): mock>=1.0.0 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 1))
    Requirement already satisfied (use --upgrade to upgrade): pylint>=1.5.4 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from -r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): six in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): astroid<1.5.0,>=1.4.5 in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): colorama in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): wrapt in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
    Requirement already satisfied (use --upgrade to upgrade): lazy-object-proxy in /Users/ali/.pyenv/versions/mvenv/lib/python2.7/site-packages (from astroid<1.5.0,>=1.4.5->pylint>=1.5.4->-r dev_requirements.txt (line 2))
    npm WARN package.json [email protected] No README data
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/engine.io/node_modules/engine.io-parser requires has-binary@'0.1.6' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/has-binary,
    npm WARN unmet dependency which is version 0.1.7
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client requires component-emitter@'1.1.2' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-client/node_modules/component-emitter,
    npm WARN unmet dependency which is version 1.2.0
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/socket.io-adapter/node_modules/socket.io-parser requires debug@'0.7.4' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/karma/node_modules/socket.io/node_modules/debug,
    npm WARN unmet dependency which is version 2.2.0
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/sshpk requires assert-plus@'^1.0.0' but will load
    npm WARN unmet dependency /Users/ali/repo/vprof/node_modules/phantomjs-prebuilt/node_modules/request/node_modules/http-signature/node_modules/assert-plus,
    npm WARN unmet dependency which is version 0.2.0
    

    My self-test outputs are as follows:

    (mvenv)$ python setup.py test
    running test
    testGetRunDispatcher (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjFunction (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjImportedPackage (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjModule (base_profile_test.BaseProfileUnittest) ... ok
    testInit_RunObjPackagePath (base_profile_test.BaseProfileUnittest) ... ok
    testRun (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsFunction (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsModule (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsPackageInNamespace (base_profile_test.BaseProfileUnittest) ... ok
    testRunAsPackagePath (base_profile_test.BaseProfileUnittest) ... ok
    testGetPackageCode (base_profile_test.GetPackageCodeUnittest) ... ok
    testAddCode (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testCalcHeatmap (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testInit (code_heatmap_test.CodeHeatmapCalculator) ... ok
    testAddCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_EmptyEventsList (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_NormalUsage (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_OtherCode (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTraceMemoryUsage_SameLine (memory_profile_test.CodeEventsTrackerUnittest) ... ok
    testTransformStats (runtime_profile_test.RuntimeProfileUnittest) ... ok
    
    ----------------------------------------------------------------------
    Ran 20 tests in 0.022s
    
    OK
    
    > vprof-frontend@ test /Users/ali/repo/vprof
    > karma start
    
    19 05 2016 16:37:59.894:INFO [framework.browserify]: bundle built
    19 05 2016 16:37:59.907:INFO [karma]: Karma v0.13.22 server started at http://localhost:9876/
    19 05 2016 16:37:59.914:INFO [launcher]: Starting browser PhantomJS
    19 05 2016 16:38:01.173:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#-QYG9HhjEzuR9d5NAAAA with id 80466447
    PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 3 of 3 SUCCESS (0.003 secs / 0.002 secs)
    (mvenv)$ python setup.py e2e_test
    running e2e_test
    testRequest (code_heatmap_e2e.CodeHeatmapFunctionEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapImportedPackageEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapModuleEndToEndTest) ... ok
    testRequest (code_heatmap_e2e.CodeHeatmapPackageAsPathEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileFunctionEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileImportedPackageEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfileModuleEndToEndTest) ... ok
    testRequest (memory_profile_e2e.MemoryProfilePackageAsPathEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileFunctionEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileImportedPackageEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfileModuleEndToEndTest) ... ok
    testRequest (runtime_profile_e2e.RuntimeProfilePackageAsPathEndToEndTest) ... ok
    
    ----------------------------------------------------------------------
    Ran 12 tests in 6.114s
    
    OK
    

    I'm sure it's something really straightforward I've missed, because I didn't see any issues like this in other open (or closed) issues.

    opened by alichaudry 9
  • Remove entries < 0.2% of total runtime

If you have a lot of files (say, thousands), many of them won't account for more than 0.X% of total runtime. However, they will all be rendered in the result list, making the UI load for a very long time or crash.

This is a working PoC that removes any entry <0.2% of total runtime from the results before serving them to the UI.

    Another approach may be to include only the top 99% (or whatever) time consumers, but this didn't work as well as the fixed threshold approach.
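The fixed-threshold approach can be sketched like this (the (name, runtime) pair format is a hypothetical stand-in for vprof's internal stats records):

```python
THRESHOLD = 0.002  # 0.2% of total runtime

def prune_entries(entries):
    """Drop entries whose share of total runtime is below THRESHOLD.

    `entries` is assumed to be a list of (name, runtime_seconds) pairs --
    an illustrative stand-in, not vprof's real stats structure.
    """
    total = sum(t for _, t in entries)
    if total == 0:
        return entries
    return [(name, t) for name, t in entries if t / total >= THRESHOLD]

entries = [("hot.py", 9.0), ("warm.py", 0.99), ("cold.py", 0.01)]
print(prune_entries(entries))  # cold.py (0.1% of total) is dropped
```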

    opened by jonashaag 6
  • Allow code heatmap to show time taken per line as well as execution count

    Thanks for a great tool! This visual profiler is what I was looking for in Python.

I was surprised, though, that the code heatmap only shows the execution count for the colour highlighting and for the mouseovers. For me it is far more interesting to know how long each line took, not just how many times it was run. Of course this is also somewhat in the flame graph, but not in the same format.

It would be great to be able to select what the code heatmap shows, effectively allowing the user to choose between the columns of line_profiler. This could be in the command-line options, but ideally could be in a drop-down menu in the code heatmap screen. I believe MATLAB's profiler offers something similar.

    Thanks

    feature request 
    opened by mdfirman 6
  • vprof crashes with a stacktrace on scripts with Python 3.6.1 32-bit and Windows

    Description

    vprof crashes with a stacktrace on scripts with Python 3.6.1 32-bit and Windows

    How to reproduce
    • Use Windows 10.
    • Install Python 3.6.1 32 bit. (Also had issues with 64 bit.)
    • Type pip install vprof
    • Create a script containing print("Hi") and save it as test.py
    • On the command line, type vprof -c p test.py
    Actual results
    C:\>vprof -c p test.py
    Running Profiler...
    Traceback (most recent call last):
      File "c:\program files (x86)\python36-32\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "c:\program files (x86)\python36-32\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\Program Files (x86)\Python36-32\Scripts\vprof.exe\__main__.py", line 9, in <module>
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\__main__.py", line 88, in main
        source, config, verbose=True)
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\runner.py", line 78, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 162, in run
        return dispatcher()
      File "c:\program files (x86)\python36-32\lib\site-packages\vprof\base_profiler.py", line 71, in multiprocessing_wrapper
        process.start()
      File "c:\program files (x86)\python36-32\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    AttributeError: Can't pickle local object 'run_in_another_process.<locals>.multiprocessing_wrapper.<locals>.remote_wrapper'
    
    C:\>Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\program files (x86)\python36-32\lib\multiprocessing\spawn.py", line 99, in spawn_main
        new_handle = reduction.steal_handle(parent_pid, pipe_handle)
      File "c:\program files (x86)\python36-32\lib\multiprocessing\reduction.py", line 82, in steal_handle
        _winapi.PROCESS_DUP_HANDLE, False, source_pid)
    OSError: [WinError 87] The parameter is incorrect
    
    Expected results

    Not a stack trace.

    Version and platform

    Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)] on win32
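For context, the multiprocessing spawn start method (the default on Windows) must pickle the target callable, and functions defined inside other functions cannot be pickled by reference. A minimal reproduction, independent of vprof (the function names merely mirror those in the traceback):

```python
import pickle

def run_in_another_process():
    def remote_wrapper():  # local function: qualname contains '<locals>'
        return 42
    return remote_wrapper

try:
    pickle.dumps(run_in_another_process())
except (AttributeError, pickle.PicklingError) as exc:
    # CPython refuses to pickle objects that can't be looked up
    # by their qualified name at module level.
    print("cannot pickle:", exc)
```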

    bug 
    opened by pvcraven 5
  • SIGPROF does not work on Windows

    Python 3.5.1 (v3.5.1:37a07cee5969, Dec  6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
    
    PS C:\Users\Admin\Downloads\trufont> vprof -c c Lib/trufont
    Running FlameGraphProfiler...
    Traceback (most recent call last):
      File "C:\Python35\Scripts\vprof-script.py", line 9, in <module>
        load_entry_point('vprof==0.34', 'console_scripts', 'vprof')()
      File "c:\python35\lib\site-packages\vprof\__main__.py", line 88, in main
        source, config, verbose=True)
      File "c:\python35\lib\site-packages\vprof\runner.py", line 78, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 169, in run
        prof = run_dispatcher()
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 142, in run_as_package
        with _StatProfiler() as prof:
      File "c:\python35\lib\site-packages\vprof\flame_graph.py", line 29, in __enter__
        signal.signal(signal.SIGPROF, self.sample)
    AttributeError: module 'signal' has no attribute 'SIGPROF'
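A hedged sketch (not vprof's code) of the usual portability check: probe for SIGPROF before installing a handler instead of assuming it exists.

```python
import signal

def pick_profiling_signal():
    """Return signal.SIGPROF where the platform defines it (POSIX);
    return None on platforms such as Windows that lack it."""
    return getattr(signal, "SIGPROF", None)

sig = pick_profiling_signal()
if sig is None:
    print("SIGPROF-based statistical profiling is not supported here")
else:
    # Installing a handler is now safe; SIG_IGN is just a placeholder.
    signal.signal(sig, signal.SIG_IGN)
```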
    
    bug known issue 
    opened by adrientetar 5
  • Don't suppress ImportErrors.

This is a tool for Python programmers. We know what import errors are, and they give us some idea of what we're doing wrong. Replacing them with a vague error message doesn't help.

    opened by ryneeverett 5
  • Pass back return value from the profiled function

vprof.runner.run(...) ignores the return value of the profiled function. Would it be possible to pass it back?

(I would like to submit a pull request, but it actually is not as straightforward as I thought :/)
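Until something like this is supported upstream, one workaround is to wrap the function and stash its result on the wrapper, since the caller discards the return value (a sketch; the commented runner.run call is only illustrative):

```python
def capture_result(func):
    """Wrap `func` so its return value is stored on the wrapper,
    even when the caller (e.g. vprof's runner.run) discards it."""
    def wrapper(*args, **kwargs):
        wrapper.result = func(*args, **kwargs)
        return wrapper.result
    return wrapper

def compute(x, y):
    return x + y

wrapped = capture_result(compute)
# runner.run(wrapped, 'cmhp', args=(1, 2))  # hypothetical vprof call
wrapped(1, 2)  # stand-in for the profiler invoking the function
print(wrapped.result)  # → 3
```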

    feature request 
    opened by eudoxos 3
  • Latest `highlight.js` node module breaks the build

    Description

Fails to build with the latest highlight.js node module.

    How to reproduce

    Get the latest highlight.js and run python setup.py build_ui.

    Fix

    I looked into the commits over on highlight.js and found this commit stating

    github-gist has been removed in favor of github [Jan Pilzer][]

    If I update highlight.css and change

    @import url('node_modules/highlight.js/styles/github-gist.css');

    to

    @import url('node_modules/highlight.js/styles/github.css');

    everything works.

    opened by JesseBuesking 0
  • How to use on Windows

On Windows, I did pip3 install vprof, but that didn't create a vprof.bat or vprof.exe anywhere, so when I try something like vprof -c <config> <src> on the command line (as suggested by the readme), I get "'vprof' is not recognized as an internal or external command, operable program or batch file."

    Should I add something specific to my environment variables?

    opened by page200 1
  • Profiling command-line based scripts that change terminal settings results in a termios error

    Description

    Profiling a CLI script results in this error:

    Traceback (most recent call last):
      File "/usr/local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/usr/local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/usr/local/lib/python3.9/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    termios.error: (19, 'Operation not supported by device')
    
    How to reproduce

    run vprof -c ch Test.py inside this folder.

    Actual results

    This happens after a while

    Traceback (most recent call last):
      File "/usr/local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/usr/local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/usr/local/lib/python3.9/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/usr/local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    termios.error: (19, 'Operation not supported by device')
    
    Expected results

(screenshot of the script running without the profiler; side note: ctrl+z to quit Test.py)

    Version and platform

    vprof 0.38
    MacOs 10.14.6

    opened by lomnom 0
  • object of type 'int' has no len()

    Description

    run command

    vprof -c cmh test.py 
    

    Error message :

    Running MemoryProfiler...
    Building prefix dict from the default dictionary ...
    Loading model from cache /tmp/jieba.cache
    Loading model cost 0.488 seconds.
    Prefix dict has been built successfully.
    /home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py:144: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
      warnings.warn("torch.distributed.reduce_op is deprecated, please use "
    Running FlameGraphProfiler...
    Traceback (most recent call last):
      File "/home/binnz/miniconda3/envs/transformer-tf/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/flame_graph.py", line 167, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/home/binnz/miniconda3/envs/transformer-tf/lib/python3.8/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    TypeError: object of type 'int' has no len()
    
    Version and platform

    Linux binnz-MS-7B89 5.4.0-66-generic #74~18.04.2-Ubuntu SMP Fri Feb 5 11:17:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

    vprof version: 0.38 python version: 3.8.8

    opened by binnz 0
  • Fails with "IndexError: list index out of range" when running with -h (heatmap)

    Description

    I'm not really sure where it's coming from, but I'm getting the following error when I run the profiler in heatmap mode (with the -h flag).

    How to reproduce

    Haven't been able to find out exactly what causes it yet. I just ran my script as vprof -c h "/path/to/script.py --some_param" < input_file.txt

    Actual results

    Fails with

    Traceback (most recent call last):
      File "/home/ldorigo/.local/bin/vprof", line 8, in <module>
        sys.exit(main())
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/__main__.py", line 83, in main
        program_stats = runner.run_profilers(source, config, verbose=True)
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/code_heatmap.py", line 217, in profile_module
        return base_profiler.run_in_separate_process(self._profile_module)
      File "/home/ldorigo/.local/lib/python3.9/site-packages/vprof/base_profiler.py", line 79, in run_in_separate_process
        raise exc
    IndexError: list index out of range   
    
    Expected results

    Makes a line heatmap :)

    Version and platform

    Python 3.9.1 on linux, vprof 0.38

    opened by ldorigo 1
  • [Help] How to test twisted code

    Description

I have a simple example; when I run it, an error occurs.

    from twisted.internet import reactor, task
    from vprof import runner
    
    
    def ppp():
        print('called')
    
    
    def profile():
        reactor.callLater(3, reactor.stop)
        t = task.LoopingCall(ppp)
        t.start(0.5)
        reactor.run()
    
    
    if __name__ == '__main__':
        runner.run(profile, 'cmhp')
    
    
    Traceback (most recent call last):
      File "/home/kevin/workspaces/develop/python/crawlerstack/crawlerstack_proxypool/demo.py", line 19, in <module>
        runner.run(profile, 'cmhp')
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/runner.py", line 91, in run
        run_stats = run_profilers((func, args, kwargs), options)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/runner.py", line 73, in run_profilers
        run_stats[option] = curr_profiler.run()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/base_profiler.py", line 172, in run
        return self.profile()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/vprof/flame_graph.py", line 172, in profile_function
        result = self._run_object(*self._run_args, **self._run_kwargs)
      File "/home/kevin/workspaces/develop/python/crawlerstack/crawlerstack_proxypool/demo.py", line 15, in profile
        reactor.run()
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 1282, in run
        self.startRunning(installSignalHandlers=installSignalHandlers)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 1262, in startRunning
        ReactorBase.startRunning(self)
      File "/home/kevin/.virtualenvs/crawlerstack_proxypool-VZQM9sgQ/lib/python3.7/site-packages/twisted/internet/base.py", line 765, in startRunning
        raise error.ReactorNotRestartable()
    twisted.internet.error.ReactorNotRestartable
    
    Version and platform
    • vprof 0.38
    • Linux 5.4.70-amd64-desktop #1 SMP Wed Oct 14 15:24:23 CST 2020 x86_64 GNU/Linux
    • Python 3.7.3
    • twisted 20.3.0
    question 
    opened by whg517 0
Releases (v0.38)
  • v0.38(Feb 29, 2020)

  • v0.37.6(Jun 24, 2018)

  • v0.37.5(Jun 24, 2018)

  • v0.37.4(Dec 10, 2017)

  • v0.37.3(Oct 31, 2017)

  • v0.37.2(Oct 12, 2017)

  • v0.37.1(Sep 26, 2017)

  • v0.37(Jul 20, 2017)

    vprof 0.37 has been released.

    New features

    • Now you can see the run time distribution for modules under "Inspected modules".
    • You can also get info about the current tab by pressing the "?" button in the top right corner.
    • Bug fixes and performance improvements.
    Source code(tar.gz)
    Source code(zip)
  • v0.36.1(May 1, 2017)

  • v0.36(Apr 9, 2017)

    vprof 0.36 has been released.

    New features

    Code heatmap shows run times for individual lines

Now you can see the run time distribution over executed lines on the code heatmap tab. The feature is experimental and will be improved in future versions.


    Source code(tar.gz)
    Source code(zip)
  • v0.35(Jan 12, 2017)

    vprof 0.35 is ready!

    New features

    Consistent flame graph coloring

Now every function in the flame graph has its own color, allowing you to explore function call patterns.

    Source code(tar.gz)
    Source code(zip)
  • v0.34(Nov 17, 2016)

    vprof 0.34 has been released!

    Thanks to all contributors!

    New features

    Render visualizations from file

vprof can now save profile stats to a file and render visualizations from a previously saved file.

    vprof -c cmh src.py --output-file profile.json
    

    writes profile to file and

    vprof --input-file profile.json
    

    renders visualizations from previously saved file.

Please note that stats rendering is supported within the same version only (e.g. vprof 0.35 won't render stats from 0.34).

    Flame graph uses statistical profiler

    Flame graph now uses statistical profiler instead of cProfile.

    "Profiler" tab

The new "Profiler" tab shows how long and how often various parts of the program are executed.

    profile

    Other changes

    -s flag has been removed from CLI.

Please check `vprof -h` or README.md for more info.

    Source code(tar.gz)
    Source code(zip)
  • v0.33(Sep 21, 2016)

    Hooray, vprof 0.33 has been released!

    New features

    "Objects left in memory" table

Shows objects that are tracked by the CPython garbage collector and remain in memory after code execution.

    memory-stats

    Improvements

    • Faster memory stats rendering.
    • More accurate memory usage measurement.
    • All data between client and server is gzipped now.
    • Minor UI adjustments.
    Source code(tar.gz)
    Source code(zip)
  • v0.32(Jun 30, 2016)

  • v0.31(Jun 12, 2016)

  • v0.3(May 19, 2016)

    • Add support for profiling single functions.
    • Add support for profiling Python packages.
    • Improve UI performance.
    • Bug fixes and other performance improvements.
  • v0.22(Mar 2, 2016)

  • v0.21(Feb 22, 2016)

  • v0.2(Feb 10, 2016)

Owner
Nick Volynets