A faster and highly-compatible implementation of the Python programming language. The code here is out of date; please follow our blog.

Overview

Pyston is a faster and highly-compatible implementation of the Python programming language. Version 2 is currently closed source, but you can find the old v1 source code under the v1.0 tag in this repository.

For updates, please follow our blog

Comments
  • Change build system?

    Do you plan to move to another build system like CMake or ~~autotools~~? Will this be in your roadmap #11?

    Thanks! And sorry to disturb you, but this question seems important.

    idea 
    opened by darnuria 30
  • Trying to fix Issue #171

    I tried to fix issue #171, and I also added the line $(LLVM_CONFIGURE_LINE) --disable-bindings to the Makefile: without the --disable-bindings flag I can't configure LLVM properly, because there is an error saying that since bindings are enabled I must install ctypes on my machine, which I have installed.

    Back to bug #171: I tried to add some simple identity testing in the __contains__ and __eq__ methods, but I don't know for sure whether this is sufficient. What does compareInternal do exactly? Should we put the identity test there?
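For reference, CPython's container membership checks identity before equality; a minimal pure-Python sketch of that behavior (an illustration of the semantics, not Pyston's actual compareInternal) looks like:

```python
def list_contains(lst, x):
    # CPython's `in` checks identity first, then equality, per element.
    # The identity shortcut is why a NaN can be found in a list even
    # though NaN != NaN.
    return any(x is item or x == item for item in lst)

nan = float("nan")
assert nan != nan                 # equality alone would never match
assert list_contains([nan], nan)  # but the identity check succeeds
assert list_contains([1, 2, 3], 2)
assert not list_contains([1, 2, 3], 7)
```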

    Also, this is my first pull request, I have signed the agreement. Thanks!

    opened by ghost 19
  • cmake change default build directory, configure script, Fedora support

    Tried to hijack #441 for this, but couldn't once I deleted the original branch.

    This changes the build directories to $(SRC_DIR)/build/Debug $(SRC_DIR)/build/Release $(SRC_DIR)/build/Debug-gcc

    I think the usual complaint about polluting the source directory is when the object files are right next to source, but I guess we'll see. It's easy to override BUILD_DIR.

    opened by dagar 18
  • Add a new JIT tier which is tightly coupled to the AST Interpreter

    This JIT is tightly coupled to the ASTInterpreter: at every CFGBlock* entry/exit one can switch between interpreting and directly executing the generated code, without having to do any translations. Generating the code is pretty fast compared to the LLVM tier, but the generated code is not as fast as code generated by the higher LLVM tiers. Still, because the JITed code can use runtime ICs, avoids a lot of interpretation overhead, and stores CFGBlock-local symbols inside register/stack slots, it's much faster than the interpreter.
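A rough sketch of this per-block tier switching, with an interpreter function doubling as a stand-in for generated code (the class name, threshold, and frame representation are invented for illustration, not Pyston's real API):

```python
JIT_THRESHOLD = 2  # invented number; the real heuristic differs

class CFGBlock:
    def __init__(self, interp_fn):
        self.interp = interp_fn  # slow path: interpret the block
        self.compiled = None     # fast path, filled in once the block is hot
        self.count = 0

    def execute(self, frame):
        self.count += 1
        if self.compiled is None and self.count > JIT_THRESHOLD:
            # Stand-in for code generation: reuse the interpreter function.
            self.compiled = self.interp
        fn = self.compiled or self.interp
        return fn(frame)  # each block returns the next block (or None)

def run(entry, frame):
    # Switching happens only at block entry/exit, so no state translation
    # between tiers is needed: both tiers read and write the same frame.
    block = entry
    while block is not None:
        block = block.execute(frame)
    return frame

def loop_body(frame):
    frame["i"] += 1
    return body if frame["i"] < 5 else None

body = CFGBlock(loop_body)
frame = run(body, {"i": 0})
assert frame["i"] == 5
assert body.compiled is not None  # the block got hot and was "JITed"
```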

    Current perf numbers:

    First run:
                  pyston (calibration)                      :    1.0s base1: 1.0 (+1.5%)
                  pyston django_template.py                 :    5.2s base1: 6.7 (-23.0%)
                  pyston pyxl_bench.py                      :    4.6s base1: 5.0 (-8.0%)
                  pyston sqlalchemy_imperative2.py          :    5.7s base1: 6.8 (-14.9%)
                  pyston django_migrate.py                  :    1.9s base1: 3.1 (-38.4%)
                  pyston virtualenv_bench.py                :    6.2s base1: 8.1 (-22.9%)
                  pyston interp2.py                         :    3.7s base1: 4.1 (-9.9%)
                  pyston raytrace.py                        :    5.8s base1: 6.0 (-3.5%)
                  pyston nbody.py                           :    7.3s base1: 7.0 (+4.1%)
                  pyston fannkuch.py                        :    6.3s base1: 6.4 (-1.5%)
                  pyston chaos.py                           :   17.5s base1: 18.1 (-3.6%)
                  pyston fasta.py                           :    4.5s base1: 4.3 (+5.1%)
                  pyston pidigits.py                        :    5.8s base1: 5.3 (+8.7%)
                  pyston richards.py                        :    1.9s base1: 1.7 (+10.2%)
                  pyston deltablue.py                       :    1.4s base1: 1.8 (-20.4%)
                  pyston (geomean-10eb)                     :    4.6s base1: 5.1 (-9.5%)
    

    best of 3 runs:

                  pyston (calibration)                      :    1.0s base3: 1.0 (-0.5%)
                  pyston django_template.py                 :    4.4s base3: 4.6 (-5.4%)
                  pyston pyxl_bench.py                      :    3.9s base3: 3.8 (+2.5%)
                  pyston sqlalchemy_imperative2.py          :    4.9s base3: 5.1 (-5.1%)
                  pyston django_migrate.py                  :    1.7s base3: 1.7 (+1.7%)
                  pyston virtualenv_bench.py                :    5.0s base3: 5.2 (-2.4%)
                  pyston interp2.py                         :    3.6s base3: 4.1 (-12.4%)
                  pyston raytrace.py                        :    5.5s base3: 5.4 (+1.4%)
                  pyston nbody.py                           :    7.0s base3: 7.2 (-2.6%)
                  pyston fannkuch.py                        :    6.1s base3: 6.3 (-2.0%)
                  pyston chaos.py                           :   16.8s base3: 17.5 (-4.2%)
                  pyston fasta.py                           :    4.4s base3: 4.1 (+5.8%)
                  pyston pidigits.py                        :    5.7s base3: 5.3 (+8.2%)
                  pyston richards.py                        :    1.7s base3: 1.5 (+11.3%)
                  pyston deltablue.py                       :    1.2s base3: 1.4 (-12.9%)
                  pyston (geomean-10eb)                     :    4.2s base3: 4.3 (-1.4%)
    
    
    

    Because of the increased malloc traffic, this PR should benefit a lot from jemalloc...

    opened by undingen 16
  • Improve Travis CI testing speed

    @dagar First of all, thanks for getting it working, it's definitely a huge help :)

    I wonder if there's something we can do about the testing time though, which I assume comes mostly from having to compile LLVM. I don't think we can easily get rid of our source dependency on LLVM, but we end up building the same stuff every time so maybe there's some way to cache it? We could probably come up with some hash of {llvm_rev, patchset}, and map that to the final build products that we use. A simpler option could be to enable ccache for the Travis build and have Travis cache the .ccache directory.
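The {llvm_rev, patchset} cache key could be sketched as follows (a hypothetical helper with made-up inputs, not anything in Pyston's build scripts):

```python
import hashlib

def llvm_cache_key(llvm_rev, patch_contents):
    # Hash the LLVM revision together with every local patch, so a
    # change to either invalidates the cached build products.
    h = hashlib.sha256()
    h.update(llvm_rev.encode())
    for patch in patch_contents:
        h.update(patch.encode())
    return h.hexdigest()[:16]

key = llvm_cache_key("r223445", ["patch-a", "patch-b"])
assert key != llvm_cache_key("r223446", ["patch-a", "patch-b"])   # new rev
assert key != llvm_cache_key("r223445", ["patch-a", "patch-b2"])  # new patch
assert key == llvm_cache_key("r223445", ["patch-a", "patch-b"])   # stable
```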

    What do you think? Not hugely important at this stage but could be nice.

    old_v1 
    opened by kmod 16
  • Preparation for finalizer code, refactoring some simple_destructor logic.

    Mostly moving simple_destructor functions into tp_dealloc instead of their own field. Replace the field with a boolean to indicate whether the tp_dealloc has the safety properties of a simple_destructor. Assign simple_destructor to more builtin types.

    Depends on gcfinalizers2 PR.

    opened by rudi-c 14
  • Incremental traceback

    This works, and results in a pretty decent speedup for django-template.py, from ~10s to ~8s.

    Opening a PR to let Travis CI run the slow tests. I still need to clean up the changes and figure out a better way to deal with the state I'm carrying around during unwinding.

    opened by toshok 14
  • slots (WIP)

    There are still a couple of tests failing (including the one with the recursive GC call that I talked about on Gitter... I'm still not sure if that's partially the fault of this diff or not), but I'll go ahead and put it up here. Also, this ended up being one of those diffs where I fix one thing and that leads to fixing another thing...

    Anyway, here is what is happening in this diff:

    • In typeNew, I add support for __slots__. This adds a member descriptor for each name in the slots array. Also, if __dict__ is in the slots, then it adds a dict (well, really an attrs, since it's pyston).
    • Hopefully this should reduce allocations since most objects with slots won't have to allocate attrs.
    • To garbage collect the slots, we need to keep track of the slot locations in the type object. CPython lists PyMemberDefs at the end of a type object. We don't need the full PyMemberDef struct (as far as I know--maybe the c api requires it though), but for now, I just put the slot offsets in a variable array in the type object.
    • In the process of writing the above logic, I stumbled across a tangential problem with the way we add a dict (or in our case, attrs) to a class when subclassing a class that is variable-size. This is tricky, since you have to leave the base layout alone, but that means you don't have a fixed offset at which to put the dict afterwards. CPython solves this by allowing tp_dictoffset to be negative, so you can offset it from the end of the object. From the [python docs](https://docs.python.org/2/c-api/typeobj.html#c.PyTypeObject.tp_dictoffset):

    If the value is less than zero, it specifies the offset from the end of the instance structure. A negative offset is more expensive to use, and should only be used when the instance structure contains a variable-length part. This is used for example to add an instance variable dictionary to subtypes of str or tuple. Note that the tp_basicsize field should account for the dictionary added to the end in that case, even though the dictionary is not included in the basic object layout.

    In the process of supporting this, I had to make sure that string and tuple subclassed VarBox and set the ob_size field. It's a bit messy and I think more can be done here to clean things up, but I didn't want to dig too far on this since, again, it's kind of tangential.
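The user-visible behavior being implemented here can be checked against CPython's standard __slots__ semantics (this is plain CPython behavior, not Pyston-specific code):

```python
class Point:
    __slots__ = ("x", "y")  # member descriptors, no per-instance dict

class FlexPoint:
    __slots__ = ("x", "y", "__dict__")  # opt back in to a dict

p = Point()
p.x = 1
assert not hasattr(p, "__dict__")  # no dict was allocated
assert type(Point.x).__name__ == "member_descriptor"
try:
    p.z = 2                        # not listed in __slots__
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")

f = FlexPoint()
f.extra = "ok"                     # allowed: __dict__ is one of the slots
assert f.__dict__ == {"extra": "ok"}
```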

    opened by tjhance 14
  • Some minor fixes found in NumPy tests

    • Add a __format__ function to the long object. Using the generic format function (object_format) causes many redundant calls and cannot handle the d format code.
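The difference between a type-specific __format__ and the generic object fallback is easy to demonstrate in CPython (this is the standard behavior the fix above reproduces):

```python
# int defines its own __format__, so numeric format codes like "d" work:
assert format(5, "d") == "5"
assert format(255, "x") == "ff"

# A type that falls back to object.__format__ only accepts an empty
# format spec; passing "d" raises TypeError:
class Opaque:
    pass

assert format(Opaque(), "").startswith("<")  # default repr-style output
try:
    format(Opaque(), "d")
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for non-empty spec")
```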

    Marking as "WIP" because it may contain other minor NumPy fixes soon.

    changes requested 
    opened by Daetalus 13
  • Call finalizers during garbage collection with ordering

    Attempt #2 at a PR implementing PyPy's finalization order algorithm. Depends on PRs #662 and #666.

    As far as I can tell, 1/3 of the slowdown for django_template comes from the extra switch statement added in TraceStack which now has 3 different modes, and the other 2/3 of the slowdown comes from always adding unwind info in doSafePoint. fasta.py is suffering a massive slowdown because of large numbers of PyCapsule objects which don't seem to be safe to turn into simple destructors.

                  pyston (calibration)                      :    1.0s baseline: 1.0 (+0.1%)
                  pyston django_template.py                 :    4.6s baseline: 4.5 (+2.1%)
                  pyston pyxl_bench.py                      :    3.7s baseline: 3.6 (+3.6%)
                  pyston sqlalchemy_imperative2.py          :    5.2s baseline: 5.1 (+2.8%)
                  pyston django_migrate.py                  :    1.8s baseline: 1.7 (+3.3%)
                  pyston virtualenv_bench.py                :    5.3s baseline: 5.2 (+3.4%)
                  pyston interp2.py                         :    3.9s baseline: 3.9 (-0.2%)
                  pyston raytrace.py                        :    5.7s baseline: 5.3 (+7.4%)
                  pyston nbody.py                           :    7.1s baseline: 6.8 (+5.8%)
                  pyston fannkuch.py                        :    6.1s baseline: 5.8 (+5.1%)
                  pyston chaos.py                           :   18.0s baseline: 16.0 (+12.8%)
                  pyston fasta.py                           :    5.0s baseline: 4.1 (+22.2%)
                  pyston pidigits.py                        :    4.9s baseline: 4.7 (+4.7%)
                  pyston richards.py                        :    1.5s baseline: 1.4 (+1.3%)
                  pyston deltablue.py                       :    1.4s baseline: 1.3 (+4.0%)
                  pyston (geomean-10eb)                     :    4.3s baseline: 4.1 (+5.5%)
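A much-simplified sketch of ordered finalization is a topological sort over reference edges (PyPy's real algorithm also handles cycles, which this toy version does not; all names here are invented):

```python
def finalization_order(dead, edges):
    # edges[obj] lists the objects obj references. A finalizer may
    # touch its referents, so an object must be finalized before
    # anything it points to: referrers first, referents last.
    order, seen = [], set()

    def visit(obj):
        if obj in seen:
            return
        seen.add(obj)
        for ref in edges.get(obj, ()):
            if ref in dead:
                visit(ref)
        order.append(obj)  # appended only after all its referents

    for obj in dead:
        visit(obj)
    return list(reversed(order))  # reverse so referrers come first

# a -> b -> c: a is finalized first, c last.
assert finalization_order(["a", "b", "c"],
                          {"a": ["b"], "b": ["c"]}) == ["a", "b", "c"]
```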
    
    opened by rudi-c 13
  • Add a cache for JITed object code

    This is a simple cache - we still need to generate the IR but if we can find a cache file created for the exact same IR we will load it and skip instruction selection etc...

    Our benchmark suite does not benefit much from this patch because for most tests the JITing time is too short.

    • I see a +2% slowdown on the first run against unmodified pyston (we have to do additional hashing, disk writes, pointer relocating)
    • on the second run it turns into a -2% speedup.
    • django_setup.py improves from 2.3sec to about 1.6sec.

    The cache works by creating symbols for all embedded pointers which change from one pyston start to the next using the same executable (fixed, executable-specific addresses are emitted directly, in order to reduce the number of symbols which need relocations). The textual IR representation of the module gets crc32-hashed (I'm very open to other ideas :-)), and if we find a file with the same hash in the pyston_object_cache directory we use it. While this approach is very safe, it fails if the IR output doesn't have the same variable names, line numbers, etc. That's why I changed the variable name assignment to use an incremental index in the name instead of the pointer value as a string. Even after this patch there are still a few instances of nondeterministic IR output (but by far most cases should be handled); I plan to improve this in a follow-up patch.

    On our devel machines we will generate a huge number of cache files with this patch, because a cache file only works for the exact same executable. I plan to generate a hash of the pyston executable, save the cache files in a directory named after that hash, and remove on startup all directories which do not match this hash. Better ideas? Another issue is that the IR is now more difficult to read, because the patchpoint call destinations will all point to a dummy -1 address; but there is an option to disable the cache if one has to debug something.
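The hash-and-lookup scheme described above can be sketched like this (hypothetical file layout and helper names; Pyston's actual cache code differs):

```python
import os
import tempfile
import zlib

def cache_path(cache_dir, ir_text):
    # crc32 of the textual IR names the cache file, as described above.
    key = "%08x" % (zlib.crc32(ir_text.encode()) & 0xFFFFFFFF)
    return os.path.join(cache_dir, key + ".o")

def load_or_compile(cache_dir, ir_text, compile_fn):
    path = cache_path(cache_dir, ir_text)
    if os.path.exists(path):          # hit: skip instruction selection etc.
        with open(path, "rb") as f:
            return f.read()
    obj = compile_fn(ir_text)         # miss: compile, then store
    with open(path, "wb") as f:
        f.write(obj)
    return obj

with tempfile.TemporaryDirectory() as d:
    ir = "define i64 @f() { ret i64 1 }"
    first = load_or_compile(d, ir, lambda s: b"objcode")
    second = load_or_compile(d, ir, lambda s: b"SHOULD NOT RUN")
    assert first == second == b"objcode"  # second call was a cache hit
assert cache_path("c", "a") != cache_path("c", "b")
```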

    opened by undingen 13
  • Benchmark confirmed 8% diff to Python 3.8!

    Hi!

    I am the maintainer of the python-speed benchmark (https://github.com/vprelovac/python-speed) and I can confirm the increased speed in Pyston. It is about 8% overall (not exactly the 20% that is claimed), mostly due to much better stack handling (which shows about a 40% speed increase). Overall a fantastic job, and looking forward to the next release!

    Python 3 (python-speed v1.2 using python v3.8.5):

    string/mem: 2144.74 ms
    pi calc/math: 2939.3 ms
    regex: 2950.72 ms
    fibonnaci/stack: 1845.43 ms

    total: 9880.2 ms (lower is better)

    Pyston (python-speed v1.2 using python v3.8.2):

    string/mem: 2152.05 ms
    pi calc/math: 2971.2 ms
    regex: 2707.27 ms
    fibonnaci/stack: 1197.36 ms

    total: 9027.89 ms (lower is better)

    opened by vprelovac 1
  • Building against pyston libraries

    Is there any 2.1 release distribution of pyston which includes libpyston3.8.a or libpyston3.8.so? I need to build other software which requires the libpython*.a / libpython*.so files for Python bindings.

    opened by bzoracler 0
  • Question about JIT_MAX_MEM flag

    Hi there, and congratulations on the new release. I was wondering if you had any guidelines for setting the JIT_MAX_MEM env variable. Also, does it take a plain number, or would = '100MB' work, etc.?

    opened by mladen5000 1
  • Installation using pyenv

    pyenv is the de-facto standard for installing and managing multiple Python versions. Travis-CI uses it to build their supported Python interpreters.

    We should ensure that installing pyston with pyenv is possible.

    opened by thedrow 1