MessagePack serializer implementation for Python

Overview

MessagePack for Python

What's this

MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller. This package provides CPython bindings for reading and writing MessagePack data.

Very important notes for existing users

PyPI package name

The package name on PyPI was changed from msgpack-python to msgpack in 0.5.

When upgrading from msgpack-0.4 or earlier, run pip uninstall msgpack-python before pip install -U msgpack.

Compatibility with the old format

You can use the use_bin_type=False option to pack bytes objects into the raw type from the old msgpack spec, instead of the bin type from the new spec.

You can unpack the old msgpack format using the raw=True option. It unpacks msgpack's str (raw) type into Python bytes.

See the note below for details.

Major breaking changes in msgpack 1.0

  • Python 2

    • The extension module does not support Python 2 anymore. The pure Python implementation (msgpack.fallback) is used for Python 2.
  • Packer

    • use_bin_type=True by default. bytes are encoded using the bin type. If you are still using Python 2, you must use unicode for all string types. You can use use_bin_type=False to encode into the old msgpack format.
    • The encoding option is removed. UTF-8 is always used.
  • Unpacker

    • raw=False by default. msgpack's str values are assumed to be valid UTF-8 and are decoded to Python str (unicode) objects.
    • The encoding option is removed. You can use raw=True to support the old format.
    • The default value of max_buffer_size is changed from 0 to 100 MiB.
    • The default value of strict_map_key is changed to True to avoid hashdos attacks. You need to pass strict_map_key=False if you have data containing map keys whose type is neither bytes nor str.
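Of these, strict_map_key is the change most likely to affect existing data. A minimal sketch of the new behavior (assuming msgpack >= 1.0 is installed; the dict contents are illustrative):

```python
import msgpack

# Since 1.0, map keys other than str/bytes are rejected by default.
packed = msgpack.packb({1: "one", 2: "two"})

try:
    msgpack.unpackb(packed)  # strict_map_key=True is the default
except ValueError:
    print("integer map keys rejected by default")

# Opting out restores the old permissive behavior.
unpacked = msgpack.unpackb(packed, strict_map_key=False)
```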

Install

$ pip install msgpack

Pure Python implementation

The extension module in msgpack (msgpack._cmsgpack) does not support Python 2 or PyPy.

But msgpack provides a pure Python implementation (msgpack.fallback) for PyPy and Python 2.

Windows

When you can't use a binary distribution, you need to install Visual Studio or the Windows SDK to build the extension on Windows. Without the extension, the pure Python implementation runs slowly on CPython.

How to use

NOTE: The examples below pass raw=False and use_bin_type=True explicitly for the benefit of users on msgpack < 1.0. These options are the defaults from msgpack 1.0, so you can omit them.

One-shot pack & unpack

Use packb for packing and unpackb for unpacking. msgpack provides dumps and loads as aliases for compatibility with json and pickle.

pack and dump pack to a file-like object. unpack and load unpack from a file-like object.
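A short sketch of these file-like variants, using io.BytesIO as a stand-in for a real file (the payload dict is illustrative):

```python
import msgpack
from io import BytesIO

buf = BytesIO()
msgpack.pack({"compact": True, "schema": 0}, buf)  # writes to the file-like object
buf.seek(0)
obj = msgpack.unpack(buf)  # reads one object back
```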

>>> import msgpack
>>> msgpack.packb([1, 2, 3], use_bin_type=True)
b'\x93\x01\x02\x03'
>>> msgpack.unpackb(_, raw=False)
[1, 2, 3]

unpackb unpacks msgpack's array to a Python list by default, but it can also unpack to a tuple:

>>> msgpack.unpackb(b'\x93\x01\x02\x03', use_list=False, raw=False)
(1, 2, 3)

You should always specify the use_list keyword argument for backward compatibility. See the performance tips on the use_list option below.

Read the docstring for other options.

Streaming unpacking

Unpacker is a "streaming unpacker". It unpacks multiple objects from one stream (or from bytes provided through its feed method).

import msgpack
from io import BytesIO

buf = BytesIO()
for i in range(100):
    buf.write(msgpack.packb(i, use_bin_type=True))

buf.seek(0)

unpacker = msgpack.Unpacker(buf, raw=False)
for unpacked in unpacker:
    print(unpacked)
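The same Unpacker can also be driven without a stream object by pushing bytes through its feed method, which is handy when data arrives in chunks (e.g. from a socket). A minimal sketch:

```python
import msgpack

unpacker = msgpack.Unpacker(raw=False)
# feed() buffers input; partial messages are held until complete.
unpacker.feed(msgpack.packb("hello", use_bin_type=True))
unpacker.feed(msgpack.packb([1, 2, 3], use_bin_type=True))

results = [obj for obj in unpacker]
```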

Packing/unpacking of custom data type

It is also possible to pack/unpack custom data types. Here is an example for datetime.datetime.

import datetime
import msgpack

useful_dict = {
    "id": 1,
    "created": datetime.datetime.now(),
}

def decode_datetime(obj):
    if '__datetime__' in obj:
        obj = datetime.datetime.strptime(obj["as_str"], "%Y%m%dT%H:%M:%S.%f")
    return obj

def encode_datetime(obj):
    if isinstance(obj, datetime.datetime):
        return {'__datetime__': True, 'as_str': obj.strftime("%Y%m%dT%H:%M:%S.%f")}
    return obj


packed_dict = msgpack.packb(useful_dict, default=encode_datetime, use_bin_type=True)
this_dict_again = msgpack.unpackb(packed_dict, object_hook=decode_datetime, raw=False)

Unpacker's object_hook callback receives a dict; the object_pairs_hook callback may instead be used to receive a list of key-value pairs.
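A minimal illustration of object_pairs_hook (the payload is illustrative): passing list receives each map as a list of key-value tuples instead of a dict:

```python
import msgpack

packed = msgpack.packb({"a": 1, "b": 2}, use_bin_type=True)
# object_pairs_hook is called with an iterable of (key, value) tuples.
pairs = msgpack.unpackb(packed, object_pairs_hook=list, raw=False)
```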

Extended types

It is also possible to pack/unpack custom data types using the ext type.

>>> import msgpack
>>> import array
>>> def default(obj):
...     if isinstance(obj, array.array) and obj.typecode == 'd':
...         return msgpack.ExtType(42, obj.tobytes())
...     raise TypeError("Unknown type: %r" % (obj,))
...
>>> def ext_hook(code, data):
...     if code == 42:
...         a = array.array('d')
...         a.frombytes(data)
...         return a
...     return msgpack.ExtType(code, data)
...
>>> data = array.array('d', [1.2, 3.4])
>>> packed = msgpack.packb(data, default=default, use_bin_type=True)
>>> unpacked = msgpack.unpackb(packed, ext_hook=ext_hook, raw=False)
>>> data == unpacked
True

Advanced unpacking control

As an alternative to iteration, Unpacker objects provide unpack, skip, read_array_header and read_map_header methods. The former two read an entire message from the stream, respectively de-serialising and returning the result, or ignoring it. The latter two methods return the number of elements in the upcoming container, so that each element in an array, or key-value pair in a map, can be unpacked or skipped individually.
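A sketch of these methods, reading an array header and then handling each element individually (the payload is illustrative):

```python
import msgpack

unpacker = msgpack.Unpacker(raw=False)
unpacker.feed(msgpack.packb([1, 2, 3], use_bin_type=True))

n = unpacker.read_array_header()  # number of upcoming elements
first = unpacker.unpack()         # deserialize the first element
unpacker.skip()                   # ignore the second
last = unpacker.unpack()
```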

Notes

string and binary type

Early versions of msgpack didn't distinguish string and binary types. The type for representing both string and binary types was named raw.

You can pack into and unpack from this old spec using use_bin_type=False and raw=True options.

>>> import msgpack
>>> msgpack.unpackb(msgpack.packb([b'spam', u'eggs'], use_bin_type=False), raw=True)
[b'spam', b'eggs']
>>> msgpack.unpackb(msgpack.packb([b'spam', u'eggs'], use_bin_type=True), raw=False)
[b'spam', 'eggs']

ext type

To use the ext type, pass a msgpack.ExtType object to the packer.

>>> import msgpack
>>> packed = msgpack.packb(msgpack.ExtType(42, b'xyzzy'))
>>> msgpack.unpackb(packed)
ExtType(code=42, data=b'xyzzy')

You can use it with default and ext_hook; see the Extended types section above.

Security

For unpacking data received from an unreliable source, msgpack provides two security options.

max_buffer_size (default: 100*1024*1024) limits the internal buffer size. It is also used to limit the preallocated list size.

strict_map_key (default: True) limits the type of map keys to bytes and str. While the msgpack spec doesn't limit the types of map keys, there is a risk of hashdos attacks. If you need to support other types for map keys, use strict_map_key=False.
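For example, an Unpacker with a small max_buffer_size rejects oversized input by raising BufferFull when feed would exceed the limit (the 16-byte limit here is illustrative):

```python
import msgpack
from msgpack.exceptions import BufferFull

unpacker = msgpack.Unpacker(max_buffer_size=16)
try:
    unpacker.feed(b"\x00" * 64)  # more data than the 16-byte limit allows
    overflowed = False
except BufferFull:
    overflowed = True
```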

Performance tips

CPython's GC can be triggered as the number of allocated objects grows, so unpacking a large message may cause useless GC runs. You can use gc.disable() when unpacking large messages.

list is Python's default sequence type, but tuple is lighter than list. You can use use_list=False when unpacking and performance matters.
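Both tips combined in one hedged sketch (the payload size is illustrative):

```python
import gc
import msgpack

packed = msgpack.packb(list(range(10000)), use_bin_type=True)

gc.disable()  # avoid GC runs triggered by allocating many objects
try:
    obj = msgpack.unpackb(packed, use_list=False, raw=False)  # tuple is lighter
finally:
    gc.enable()
```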

Comments
  • Use new buffer interface to unpack

    This PR adds support for unpacking/feeding from any object that supports the new buffer interface.

    For compatibility, an attempt is made to use the old buffer interface if that fails. On success, a RuntimeWarning is issued to inform users of possible errors and future removal of this feature.

    opened by jfolz 25
  • Backward incompatible API change toward 1.0

    DRAFT: This issue is for writing down my ideas.

    Changing default behavior to use new spec.

    See also: https://github.com/msgpack/msgpack/blob/master/spec.md#upgrading-messagepack-specification.

    v1.0

    Drop encoding and unicode_errors option in Unpacker. Add raw=False option instead. When raw=True is passed, raw data in msgpack is deserialized into bytes object.

    Drop encoding and unicode_errors option in Packer. Unicode is encoded with UTF-8 always. 'surrogatepass' is not allowed to keep generated msgpack clean. use_bin_type=False is the only way to create dirty (raw contains non UTF-8 data) msgpack.

    v0.5.x

    Add raw=True option to unpacker.

    v0.6

    0.6 is version for warnings.

    Packer warns when encoding or unicode_errors is specified.

    Unpacker warns when encoding or unicode_errors is specified. ("Use raw=False instead").

    PyPI package name

    In the past, easy_install crawled the msgpack website and found msgpack-x.y.z.tar.gz. But that was the C library. That's why I moved from msgpack to msgpack-python.

    Sadly, pip doesn't support transitional packages (empty, but depending on the new package name). So I will release msgpack as both msgpack-python and msgpack for a while, until 1.0.

    As of 1.0, I will release only msgpack.

    opened by methane 22
  • Unknown serialization issue

    Hello,

    I am having serious problems with database connectivity since yesterday. I've traced the issue down to a piece of code that is passing a long integer that has python's 'L' notation appended to it. This appears to be similar to #114. I am struggling a bit because some chained dependency is explicitly requiring the latest msgpack.

    opened by lordnynex 22
  • High CPU usage when unpacking on Ubuntu 12.04 with msgpack 4.8

    (gdb) bt
    #0  0x00007fee03ef525d in sem_post () from /lib/x86_64-linux-gnu/libpthread.so.0
    #1  0x000000000056d237 in PyThread_release_lock (lock=0x27bd4f0) at ../Python/thread_pthread.h:346
    #2  0x000000000051a1fd in PyEval_EvalFrameEx (
        f=Frame 0x2ac03c0, for file /usr/local/lib/python2.7/dist-packages/msgpack/fallback.py, line 537, in _fb_unpack (self=<Unpacker(_max_buffer_size=2147483647, _encoding=None, _max_map_len=2147483647, _max_bin_len=2147483647, _max_ext_len=2147483647, _fb_sloppiness=0, _object_hook=None, _fb_buf_o=52492, _fb_buf_n=1866623, _fb_buf_i=22, _unicode_errors='strict', _use_list=True, _max_str_len=2147483647, _ext_hook=<type at remote 0x17ed550>
    

    Recently, I've been observing a strange situation: my script simply receives msgpack-ed data from stdin and extracts it, but during that procedure it causes severe futex contention. I've used gdb to track it down and found that there are tons of acquires and releases on the same lock object, e.g. "lock=0x27bd4f0", which seems to me to be used by msgpack, specifically _fb_unpack.

    Just wondering, has anyone else observed a similar phenomenon? I have not had time to dig into the _fb_unpack function, so currently I have no idea why it can cause futex contention.

    futex(0x27bd4f0, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
    futex(0x27bd4f0, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
    futex(0x27bd4f0, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 0
    futex(0x27bd4f0, FUTEX_WAKE_PRIVATE, 1) = 1
    
    opened by imcom 20
  • RFC: deprecate write_bytes option in unpackers

    I feel the write_bytes option in Unpacker makes the implementation complicated. I forgot why I added it. I'll check the history and consider another API for the use case.

    Ideas? comments?

    opened by methane 19
  • Support packing memoryview objects

    I am working on a high-throughput scenario where I need to avoid copying memory wherever possible. I frequently create buffers from existing objects and sometimes need to hand them to msgpack. Unfortunately I could not find a way to pass buffers to the current version of msgpack without copying. So I wrote this small patch to support packing arbitrary buffer objects to binary data. It works well for my use case.

    Things I'm not so sure about:

    • Pure Python fallback is not implemented. Its structure is different from the Cython code and I'm not sure what to do exactly.
    • I'm not an expert in using this library. Does this interfere with any other functionality?
    • Information is lost in the conversion, e.g., the shape of NumPy arrays, since the lib just "swallows" everything that is a buffer. A user might expect to get the same type out again, but gets bytes instead.
    • Strided data is still copied.
    • No test cases yet. Might need some more type checks.
    opened by jfolz 18
  • use_bin_type - confusing future hint

    string and binary type
    
    Early versions of msgpack didn't distinguish string and binary types (like Python 1).
    

    Ehrm, did you really mean Python 1.x here or rather 2.x?

    The type for representing both string and binary types was named raw.
    
    For backward compatibility reasons, msgpack-python will still default all
    strings to byte strings, unless you specify the use_bin_type=True option
    in the packer.
    

    So that means the current default is still False (or rather 0, from the code).

    If you do so, it will use a non-standard type called bin to serialize byte arrays,
    and raw becomes to mean str. If you want to distinguish bin and raw in the
    unpacker, specify encoding='utf-8'.
    
    In future version, default value of ``use_bin_type`` will be changed to ``False``.
    

    Did you mean True here?

    To avoid this change will break your code, you must specify it explicitly even when you want to use old format.
    
    opened by ThomasWaldmann 17
  • memoryview objects are not fully supported

    memoryview is the Python 3 type for non-owning memory buffer objects, also backported to Python 2.7. unpackb and Unpacker.feed should unpack them without copying, and packing functions should handle them the same as bytes. The current support is quite limited due to multiple issues:

    Python 3.3

    >>> msgpack._unpacker.unpackb(memoryview(b'\x91\xc3'))
    [True]
    >>> msgpack.fallback.unpackb(memoryview(b'\x91\xc3'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/nofitserov/.local/lib64/python3.3/site-packages/msgpack/fallback.py", line 93, in unpackb
        ret = unpacker._fb_unpack()
      File "/home/nofitserov/.local/lib64/python3.3/site-packages/msgpack/fallback.py", line 383, in _fb_unpack
        typ, n, obj = self._read_header(execute, write_bytes)
      File "/home/nofitserov/.local/lib64/python3.3/site-packages/msgpack/fallback.py", line 274, in _read_header
        b = ord(c)
    TypeError: ord() expected string of length 1, but memoryview found
    >>> msgpack._packer.Packer().pack(memoryview(b'abc'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "_packer.pyx", line 224, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:224)
      File "_packer.pyx", line 226, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:226)
      File "_packer.pyx", line 221, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:221)
    TypeError: can't serialize <memory at 0x7f37c3e41460>
    >>> msgpack.fallback.Packer().pack(memoryview(b'abc'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/nofitserov/.local/lib64/python3.3/site-packages/msgpack/fallback.py", line 618, in pack
        self._pack(obj)
      File "/home/nofitserov/.local/lib64/python3.3/site-packages/msgpack/fallback.py", line 615, in _pack
        raise TypeError("Cannot serialize %r" % obj)
    TypeError: Cannot serialize <memory at 0x7f37c3e41390>
    

    Python 2.7

    >>> msgpack._unpacker.unpackb(memoryview(b'\x91\xc3'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "_unpacker.pyx", line 105, in msgpack._unpacker.unpackb (msgpack/_unpacker.cpp:105)
    TypeError: expected a readable buffer object
    >>> msgpack.fallback.unpackb(memoryview(b'\x91\xc3'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/nofitserov/.local/lib64/python2.7/site-packages/msgpack/fallback.py", line 93, in unpackb
        ret = unpacker._fb_unpack()
      File "/home/nofitserov/.local/lib64/python2.7/site-packages/msgpack/fallback.py", line 381, in _fb_unpack
        typ, n, obj = self._read_header(execute, write_bytes)
      File "/home/nofitserov/.local/lib64/python2.7/site-packages/msgpack/fallback.py", line 272, in _read_header
        b = ord(c)
    TypeError: ord() expected string of length 1, but memoryview found
    >>> msgpack._packer.Packer().pack(memoryview(b'abc'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "_packer.pyx", line 206, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:206)
      File "_packer.pyx", line 208, in msgpack._packer.Packer.pack (msgpack/_packer.cpp:208)
      File "_packer.pyx", line 203, in msgpack._packer.Packer._pack (msgpack/_packer.cpp:203)
    TypeError: can't serialize <memory at 0x7f506e40ddf8>
    >>> msgpack.fallback.Packer().pack(memoryview(b'abc'))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/nofitserov/.local/lib64/python2.7/site-packages/msgpack/fallback.py", line 616, in pack
        self._pack(obj)
      File "/home/nofitserov/.local/lib64/python2.7/site-packages/msgpack/fallback.py", line 613, in _pack
        raise TypeError("Cannot serialize %r" % obj)
    TypeError: Cannot serialize <memory at 0x7f506e40ddf8>
    

    The only available workaround right now is to explicitly convert memoryview objects to bytes, needlessly copying the contents, which degrades performance, especially for unpacking large objects.

    opened by himikof 17
  • exception hierarchy and its future

    http://msgpack-python.readthedocs.io/en/latest/api.html#exceptions

    For the upper layer(s) of the current msgpack exception hierarchy, it states that:

    Deprecated. Use Exception instead to catch all exception during packing.
    

    or

    Deprecated. Use ValueError instead.
    

    I am not sure why that is deprecated; to me this feels backwards.

    There are quite a lot of normal use cases where one wants to catch all msgpack exceptions, but one can't just catch Exception (or ValueError).

    For example, look at this, taken from borgbackup code:

    try:
        with IntegrityCheckedFile(hints_path, write=False, integrity_data=integrity_data) as fd:
            hints = msgpack.unpack(fd)
    except (msgpack.UnpackException, FileNotFoundError, FileIntegrityError) as e: 
        ... # the file is not there / is crap, rebuild it.
    

    So, if I'd use Exception there, it would also catch all sorts of unspecific issues in the IntegrityCheckedFile code - that's a bad idea.

    Of course one could work around by having an additional inner try/except and reraise some specific custom exception, but why not just keep UnpackException for such cases?

    opened by ThomasWaldmann 16
  • Change default decoder limits

    The current default limits seem too large; they can enable DoS attacks. Change the defaults to safer values.

    Current plan is "about 1MiB on amd64 system".

    • max_bin_len: 1024*1024
    • max_str_len: 1024*1024
    • max_array_len: 1024*1024/8 (each pointer has 8 bytes)
    • max_map_len: 1024*1024/32 (8byte key,hash,value, and some extra space)
    opened by methane 15
  • Stream processing requires knowledge of the data

    I was just trying to use the recently updated msgpack library for stream processing, but it still requires knowledge of the incoming data, which I don't have in all cases. What I want is a function that works roughly like this:

    >>> u = msgpack.Unpacker(StringIO('\x94\x01\x02\x03\x04'))
    >>> u.next_marker()
    ('array_start', 4)
    >>> u.next_marker()
    ('value', 1)
    >>> u.next_marker()
    ('value', 2)
    >>> u.next_marker()
    ('value', 3)
    >>> u.next_marker()
    ('value', 4)
    

    E.g., next_marker returns a tuple of the form (marker, value), where marker is one of map_start, array_start, or value. If it's value, the second item holds the actual value; if it's a container marker, it holds the size. This would allow trivial stream processing. (A value marker would never contain a map or array, just scalar values.)

    opened by mitsuhiko 15
  • Encoding to unsupported byte for machines running Linux

    I'm getting the following error when unpacking a packed byte string. What is strange is that this error doesn't occur on a laptop running MacOS Ventura, but does on Linux machines.

    'utf-8' codec can't decode byte 0x88 in position 0: invalid start byte

    To pack the object, I use:

    msgpack.packb(object, use_bin_type=True,)
    

    To unpack, I use this:

    msgpack.unpackb(object_to_deserialize, raw=False,)
    
    opened by shaon-chowdhury 0
  • Drop Python 2 support

    According to the PyPI page, Python 2 support has been dropped with release 1.0. The main branch in the repository is currently at the v1.0.4 release (https://github.com/msgpack/msgpack-python/blob/main/msgpack/__init__.py#L9).

    However, there is python2 specific code in the repository. For example:

    • setup.py checks for python2
    • msgpack/fallback.py could be simplified
    • python2 is checked in tox.ini
    • test/test_timestamp.py skips some tests with python2 (@pytest.mark.skipif(sys.version_info[0] == 2)); test/test_extension.py also has a conditional

    Are you interested in a PR to remove them?

    opened by sblondon 2
  • 1.0.4: sphinx warnings `reference target not found`

    First of all, it is currently not possible to use a straight sphinx-build command to build the documentation out of the source tree:

    + /usr/bin/sphinx-build -n -T -b man docs build/sphinx/man
    Running Sphinx v5.0.1
    making output directory... done
    WARNING: html_static_path entry '_static' does not exist
    building [mo]: targets for 0 po files that are out of date
    building [man]: all manpages
    updating environment: [new config] 3 added, 0 changed, 0 removed
    reading sources... [100%] index
    WARNING: autodoc: failed to import function 'pack' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import function 'packb' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import function 'unpack' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import function 'unpackb' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import class 'Packer' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import class 'Unpacker' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import class 'ExtType' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import class 'Timestamp' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    WARNING: autodoc: failed to import module 'exceptions' from module 'msgpack'; the following exception was raised:
    No module named 'msgpack'
    looking for now-outdated files... none found
    pickling environment... done
    checking consistency... done
    writing... python-msgpack.3 { api advanced } /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:8: WARNING: py:func reference target not found: dump
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:8: WARNING: py:func reference target not found: pack
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:12: WARNING: py:func reference target not found: dumps
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:12: WARNING: py:func reference target not found: packb
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:16: WARNING: py:func reference target not found: load
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:16: WARNING: py:func reference target not found: unpack
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:20: WARNING: py:func reference target not found: loads
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:20: WARNING: py:func reference target not found: unpackb
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/advanced.rst:10: WARNING: py:class reference target not found: msgpack.Packer
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/advanced.rst:13: WARNING: py:meth reference target not found: msgpack.Packer.bytes
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/advanced.rst:13: WARNING: py:meth reference target not found: msgpack.Packer.getbuffer
    done
    build succeeded, 21 warnings.
    

    First part of warnings can be fixed by patch like below:

    --- a/docs/conf.py~     2022-06-05 08:01:50.000000000 +0000
    +++ b/docs/conf.py      2022-06-05 08:02:43.878255796 +0000
    @@ -16,7 +16,7 @@
     # If extensions (or modules to document with autodoc) are in another directory,
     # add these directories to sys.path here. If the directory is relative to the
     # documentation root, use os.path.abspath to make it absolute, like shown here.
    -# sys.path.insert(0, os.path.abspath('.'))
    +sys.path.insert(0, os.path.abspath(".."))
    
     # -- General configuration -----------------------------------------------------
     author = "Inada Naoki"
    
    

    This patch fixes what is described in the comment, and that kind of fix is suggested in the Sphinx example conf.py: https://www.sphinx-doc.org/en/master/usage/configuration.html#example-of-configuration-file

    Then, when building my packages, I use the sphinx-build command with the -n switch, which shows warnings about missing references. These are not critical issues.

    + /usr/bin/sphinx-build -n -T -b man docs build/sphinx/man
    Running Sphinx v5.0.1
    making output directory... done
    WARNING: html_static_path entry '_static' does not exist
    building [mo]: targets for 0 po files that are out of date
    building [man]: all manpages
    updating environment: [new config] 3 added, 0 changed, 0 removed
    reading sources... [100%] index
    looking for now-outdated files... none found
    pickling environment... done
    checking consistency... done
    writing... python-msgpack.3 { api advanced } /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:8: WARNING: py:func reference target not found: dump
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:12: WARNING: py:func reference target not found: dumps
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:16: WARNING: py:func reference target not found: load
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/docs/api.rst:20: WARNING: py:func reference target not found: loads
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/fallback.py:docstring of msgpack.fallback.Packer:: WARNING: py:class reference target not found: callable
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/fallback.py:docstring of msgpack.fallback.Unpacker:6: WARNING: py:meth reference target not found: feed
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/fallback.py:docstring of msgpack.fallback.Unpacker:: WARNING: py:class reference target not found: callable
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/fallback.py:docstring of msgpack.fallback.Unpacker:: WARNING: py:class reference target not found: callable
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/ext.py:docstring of msgpack.ext.Timestamp.from_unix:: WARNING: py:class reference target not found: float.
    /home/tkloczko/rpmbuild/BUILD/msgpack-python-1.0.4/msgpack/ext.py:docstring of msgpack.ext.Timestamp.to_datetime:: WARNING: py:class reference target not found: datetime.
    done
    build succeeded, 11 warnings.
    

    You can peek at fixes for this kind of issue in other projects: https://github.com/latchset/jwcrypto/pull/289 https://github.com/click-contrib/sphinx-click/commit/abc31069 https://github.com/RDFLib/rdflib-sqlalchemy/issues/95 https://github.com/sissaschool/elementpath/commit/bf869d9e

    opened by kloczek 0
  • Enforce reuse compliance

    The https://reuse.software specification is the standard for providing machine-readable licensing and copyright information.

    I believe:

    • machine-readable metadata is a good thing.
    • licensing and copyright matter
    • automation is a good thing.

    Therefore I propose we:

    1. Make this repository compliant with the https://reuse.software/ specification
    2. Enforce compliance via a GitHub action
    opened by hexagonrecursion 0
  • Segmentation fault when calling getbuffer() on Packer object

    Hi there,

    I'm trying to get the internal data of the Packer object in order to avoid unneeded copying, as documented here.

    The code is as follows:

    import msgpack
    
    def do_the_job():
      packer = msgpack.Packer(autoreset=False)
      packer.pack(1)
      return packer.getbuffer()
    
    bytes(do_the_job())
    

    When running this snippet, I get the following error:

    [1]    9018 segmentation fault (core dumped)  python script.py
    

    I am using Ubuntu 18.04.5 LTS together with msgpack 1.0.2.

    Thanks in advance for your help and for your work on this package!

    opened by thibaudmartinez 9