A pythonic dependency injection library.

Overview

Pinject

Pinject is a dependency injection library for python.

The primary goal of Pinject is to help you assemble objects into graphs in an easy, maintainable way.

If you are already familiar with other dependency injection libraries, you may want to read the condensed summary section at the end, so that you get an idea of what Pinject is like and how it might differ from libraries you're used to.

There is a changelog of differences between released versions near the end of this README.

Why Pinject?

If you're wondering why to use a dependency injection library at all: if you're writing a lot of object-oriented code in python, then a dependency injection library will make your life easier.

If you're wondering why to use Pinject instead of another python dependency injection library, a few reasons are:

  • Pinject is much easier to get started with. Forget having to decorate your code with @inject_this and @annotate_that just to get started. With Pinject, you call new_object_graph(), one line, and you're good to go.
  • Pinject is a pythonic dependency injection library. Python ports of other libraries, like Spring or Guice, retain the feel (and verbosity) of being designed for a statically typed language. Pinject is designed from the ground up for python.
  • The design choices in Pinject are informed by several dependency injection experts working at Google, based on many years of experience. Several common confusing or misguided features are omitted altogether from Pinject.
  • Pinject has great error messages. They tell you exactly what you did wrong, and exactly where. This should be a welcome change from other dependency frameworks, with their voluminous and yet inscrutable stack traces.

Look at the simplest getting-started examples for Pinject and for other similar libraries. Pinject should be uniformly easier to use, clearer to read, and require less boilerplate. If you don't find this to be the case, email!

Installation

The easiest way to install Pinject is to get the latest released version from PyPI:

sudo pip install pinject

If you are interested in the development version, you can install the next version from Test PyPI:

sudo pip install \
    --no-deps \
    --no-cache \
    --upgrade \
    --index-url https://test.pypi.org/simple/ \
    pinject

You can also check out all the source code, including tests, designs, and TODOs:

git clone https://github.com/google/pinject

Basic dependency injection

The most important function in the pinject module is new_object_graph(). This creates an ObjectGraph, which you can use to instantiate objects using dependency injection. If you pass no args to new_object_graph(), it will return a reasonably configured default ObjectGraph.

>>> import pinject
>>> class OuterClass(object):
...     def __init__(self, inner_class):
...         self.inner_class = inner_class
...
>>> class InnerClass(object):
...     def __init__(self):
...         self.forty_two = 42
...
>>> obj_graph = pinject.new_object_graph()
>>> outer_class = obj_graph.provide(OuterClass)
>>> print(outer_class.inner_class.forty_two)
42
>>>

As you can see, you don't need to tell Pinject how to construct its ObjectGraph, and you don't need to put decorators in your code. Pinject has reasonable defaults that allow it to work out of the box.

A Pinject binding is an association between an arg name and a provider. In the example above, Pinject created a binding between the arg name inner_class and an implicitly created provider for the class InnerClass. That binding is how Pinject knew to pass an instance of InnerClass as the value of the inner_class arg when instantiating OuterClass.

Implicit class bindings

Pinject creates implicit bindings for classes. The implicit bindings assume your code follows PEP8 conventions: your classes are named in CamelCase, and your args are named in lower_with_underscores. Pinject transforms class names to injectable arg names by lowercasing words and connecting them with underscores. It will also ignore any leading underscore on the class name.

Class name    Arg name
Foo           foo
FooBar        foo_bar
_Foo          foo
_FooBar       foo_bar
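The transformation can be sketched in plain Python. This is an illustration of the naming rule, not Pinject's own code:

```python
import re

def arg_name_for_class(class_name):
    # Illustration of the implicit-binding naming rule (not Pinject's code):
    # drop any leading underscores, then split CamelCase words with "_".
    name = class_name.lstrip('_')
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

print(arg_name_for_class('Foo'))      # foo
print(arg_name_for_class('_FooBar'))  # foo_bar
```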

If two classes map to the same arg name, whether those classes are in the same module or different modules, Pinject will not create an implicit binding for that arg name (though it will not raise an error).

Finding classes and providers for implicit bindings

So far, the examples have not told new_object_graph() the classes for which it should create implicit bindings. new_object_graph() by default looks in all imported modules, but you may occasionally want to restrict the classes for which new_object_graph() creates implicit bindings. If so, new_object_graph() has two args for this purpose.

  • The modules arg specifies in which (python) modules to look for classes; this defaults to ALL_IMPORTED_MODULES.
  • The classes arg specifies an exact list of classes; this defaults to None.
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class Foo(object):
...     pass
...
>>> obj_graph = pinject.new_object_graph(modules=None, classes=[SomeClass])
>>> # obj_graph.provide(SomeClass)  # would raise a NothingInjectableForArgError
>>> obj_graph = pinject.new_object_graph(modules=None, classes=[SomeClass, Foo])
>>> some_class = obj_graph.provide(SomeClass)
>>>

Auto-copying args to fields

One thing that can get tedious about dependency injection via initializers is that you need to write __init__() methods that copy args to fields. These __init__() methods can get repetitive, especially when you have several initializer args.

>>> class ClassWithTediousInitializer(object):
...     def __init__(self, foo, bar, baz, quux):
...         self._foo = foo
...         self._bar = bar
...         self._baz = baz
...         self._quux = quux
...
>>>

Pinject provides decorators that you can use to avoid repetitive initializer bodies.

  • @copy_args_to_internal_fields prepends an underscore, i.e., it copies an arg named foo to a field named _foo. It's useful for normal classes.
  • @copy_args_to_public_fields copies the arg named as-is, i.e., it copies an arg named foo to a field named foo. It's useful for data objects.
>>> class ClassWithTediousInitializer(object):
...     @pinject.copy_args_to_internal_fields
...     def __init__(self, foo, bar, baz, quux):
...         pass
...
>>> cwti = ClassWithTediousInitializer('a-foo', 'a-bar', 'a-baz', 'a-quux')
>>> print(cwti._foo)
a-foo
>>>

When using these decorators, you'll normally use pass as the body of the initializer, but you can put other statements there if you need to. The args will be copied to fields before the initializer body is executed.

These decorators can be applied to initializers that take **kwargs but not initializers that take *pargs (since it would be unclear what field name to use).
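To see what such a decorator does under the hood, here is a minimal plain-Python sketch of the same idea. It is not Pinject's implementation (defaults handling and error checking are omitted):

```python
import functools
import inspect

def copy_args_to_internal_fields(init):
    # Sketch: copy each initializer arg to a field named with a leading
    # underscore, then run the original initializer body.
    arg_names = inspect.getfullargspec(init).args[1:]  # skip 'self'
    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        bound = dict(zip(arg_names, args))
        bound.update(kwargs)
        for name, value in bound.items():
            setattr(self, '_' + name, value)
        init(self, *args, **kwargs)
    return wrapper

class Example(object):
    @copy_args_to_internal_fields
    def __init__(self, foo, bar):
        pass

e = Example('a-foo', bar='a-bar')
print(e._foo, e._bar)  # a-foo a-bar
```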

Binding specs

To create any bindings more complex than the implicit class bindings described above, you use a binding spec. A binding spec is any python class that inherits from BindingSpec. A binding spec can do three things:

  • Its configure() method can create explicit bindings to classes or instances, as well as requiring bindings without creating them.
  • Its dependencies() method can return depended-on binding specs.
  • It can have provider methods, for which explicit bindings are created.

The new_object_graph() function takes a sequence of binding spec instances as its binding_specs arg. new_object_graph() takes binding spec instances, rather than binding spec classes, so that you can manually inject any initial dependencies into the binding specs as needed.

Binding specs should generally live in files named binding_specs.py, where each file is named in the plural even if there is exactly one binding spec in it. Ideally, a directory's worth of functionality should be coverable with a single binding spec. If not, you can create multiple binding specs in the same binding_specs.py file. If you have so many binding specs that you need to split them into multiple files, you should name them each with a _binding_specs.py suffix.

Binding spec configure() methods

Pinject creates implicit bindings for classes, but sometimes the implicit bindings aren't what you want. For instance, if you have SomeReallyLongClassName, you may not want to name your initializer args some_really_long_class_name but instead use something shorter like long_name, just for this class.

For such situations, you can create explicit bindings using the configure() method of a binding spec. The configure() method takes a function bind() as an arg and calls that function to create explicit bindings.

>>> class SomeClass(object):
...     def __init__(self, long_name):
...         self.long_name = long_name
...
>>> class SomeReallyLongClassName(object):
...     def __init__(self):
...         self.foo = 'foo'
...
>>> class MyBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('long_name', to_class=SomeReallyLongClassName)
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.long_name.foo)
foo
>>>

The bind() function passed to a configure() method binds its first arg, which must be an arg name (as a str), to exactly one of two kinds of things.

  • Using to_class binds to a class. When the binding is used, Pinject injects an instance of the class.
  • Using to_instance binds to an instance of some object. Every time the binding is used, Pinject uses that instance.
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class MyBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
a-foo
>>>

The configure() method of a binding spec may also take a function require() as an arg and use that function to require that a binding be present without actually defining that binding. require() takes as its arg the name of the arg for which to require a binding.

>>> class MainBindingSpec(pinject.BindingSpec):
...     def configure(self, require):
...         require('foo')
...
>>> class RealFooBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-real-foo')
...
>>> class StubFooBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='a-stub-foo')
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[MainBindingSpec(), RealFooBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
a-real-foo
>>> # pinject.new_object_graph(
... #    binding_specs=[MainBindingSpec()])  # would raise a MissingRequiredBindingError
...
>>>

Being able to require a binding without defining the binding is useful when you know the code will need some dependency satisfied, but there is more than one implementation that satisfies that dependency, e.g., there may be a real RPC client and a fake RPC client. Declaring the dependency means that any expected but missing bindings will be detected early, when new_object_graph() is called, rather than in the middle of running your program.

You'll notice that the configure() methods above have different signatures, sometimes taking the arg bind and sometimes taking the arg require. configure() methods must take at least one arg that is either bind or require, and they may have both args. Pinject will pass whichever arg or args your configure() method needs.
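One way a framework can pass a method exactly the args it declares is to inspect the method's signature. The following is a standalone sketch of that technique, not Pinject's own code:

```python
import inspect

def call_configure(binding_spec, bind_fn, require_fn):
    # Sketch: look at which of 'bind' and 'require' the configure() method
    # declares, and pass only those.
    available = {'bind': bind_fn, 'require': require_fn}
    arg_names = inspect.getfullargspec(binding_spec.configure).args[1:]  # skip 'self'
    binding_spec.configure(**{name: available[name] for name in arg_names})

class OnlyBinds(object):
    def configure(self, bind):
        bind('foo', to_instance='a-foo')

bindings = {}
call_configure(OnlyBinds(),
               bind_fn=lambda name, to_instance: bindings.setdefault(name, to_instance),
               require_fn=lambda name: None)
print(bindings)  # {'foo': 'a-foo'}
```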

Binding spec dependencies

Binding specs can declare dependencies. A binding spec declares its dependencies by returning a sequence of instances of the dependent binding specs from its dependencies() method.

>>> class ClassOne(object):
...    def __init__(self, foo):
...        self.foo = foo
...
>>> class BindingSpecOne(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='foo-')
...
>>> class ClassTwo(object):
...     def __init__(self, class_one, bar):
...         self.foobar = class_one.foo + bar
...
>>> class BindingSpecTwo(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('bar', to_instance='-bar')
...     def dependencies(self):
...         return [BindingSpecOne()]
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[BindingSpecTwo()])
>>> class_two = obj_graph.provide(ClassTwo)
>>> print(class_two.foobar)
foo--bar
>>>

If classes from module A are injected as collaborators into classes from module B, then you should consider having the binding spec for module B depend on the binding spec for module A. In the example above, ClassOne is injected as a collaborator into ClassTwo, and so BindingSpecTwo (the binding spec for ClassTwo) depends on BindingSpecOne (the binding spec for ClassOne).

In this way, you can build a graph of binding spec dependencies that mirrors the graph of collaborator dependencies.

Since explicit bindings cannot conflict (see the section below on binding precedence), a binding spec should only have dependencies that there will never be a choice about using. If there may be a choice, then it is better to list the binding specs separately and explicitly when calling new_object_graph().

The binding spec dependencies can be a directed acyclic graph (DAG); that is, binding spec A can be a dependency of B and of C, and binding spec D can have dependencies on B and C. Even though there are multiple dependency paths from D to A, the bindings in binding spec A will only be evaluated once.

The binding spec instance of A that is a dependency of B is considered the same as the instance that is a dependency of C if the two instances are equal (via __eq__()). The default implementation of __eq__() in BindingSpec says that two binding specs are equal if they are of exactly the same python type. You will need to override __eq__() (as well as __hash__()) if your binding spec is parameterized, i.e., if it takes one or more initializer args so that two instances of the binding spec may behave differently.

>>> class SomeBindingSpec(pinject.BindingSpec):
...     def __init__(self, the_instance):
...         self._the_instance = the_instance
...     def configure(self, bind):
...         bind('foo', to_instance=self._the_instance)
...     def __eq__(self, other):
...         return (type(self) == type(other) and
...                 self._the_instance == other._the_instance)
...     def __hash__(self):
...         return hash(type(self)) ^ hash(self._the_instance)
...
>>>
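The effect of value-based __eq__() and __hash__() can be seen in a standalone plain-Python sketch: equal spec instances collapse to a single entry in a set, which is what allows a spec reached via multiple dependency paths to be evaluated only once.

```python
class ParamSpec(object):
    # Standalone sketch of a parameterized spec with value-based equality
    # (not a real BindingSpec).
    def __init__(self, value):
        self._value = value
    def __eq__(self, other):
        return type(self) == type(other) and self._value == other._value
    def __hash__(self):
        return hash(type(self)) ^ hash(self._value)

# Two equal instances reached via different dependency paths count as one:
print(len({ParamSpec('db'), ParamSpec('db'), ParamSpec('cache')}))  # 2
```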

Provider methods

If it takes more to instantiate a class than calling its initializer and injecting initializer args, then you can write a provider method for it. Pinject calls provider methods to create the objects it injects as the values of other args.

Pinject looks on binding specs for methods named like provider methods and then creates explicit bindings for them.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foo(self):
...         return 'some-complex-foo'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
some-complex-foo
>>>

Pinject looks on binding specs for methods whose names start with provide_, and it assumes that the methods are providers for whatever the rest of their method names are. For instance, Pinject assumes that the method provide_foo_bar() is a provider method for the arg name foo_bar.
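That discovery step can be sketched in plain Python. This is an illustration of the naming convention, not Pinject's own code:

```python
def provider_bindings(binding_spec):
    # Sketch: every method named provide_<something> yields a binding
    # for the arg name <something>.
    prefix = 'provide_'
    return {name[len(prefix):]: getattr(binding_spec, name)
            for name in dir(binding_spec)
            if name.startswith(prefix) and callable(getattr(binding_spec, name))}

class SomeBindingSpec(object):
    def provide_foo_bar(self):
        return 'a-foo-bar'

bindings = provider_bindings(SomeBindingSpec())
print(sorted(bindings))       # ['foo_bar']
print(bindings['foo_bar']())  # a-foo-bar
```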

Pinject injects all args of provider methods that have no default when it calls the provider method.

>>> class SomeClass(object):
...     def __init__(self, foobar):
...         self.foobar = foobar
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foobar(self, bar, hyphen='-'):
...         return 'foo' + hyphen + bar
...     def provide_bar(self):
...         return 'bar'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foobar)
foo-bar
>>>

Binding precedence

Bindings have precedence: explicit bindings take precedence over implicit bindings.

  • Explicit bindings are the bindings that come from binding specs.
  • Implicit bindings are the bindings created for classes in the modules and classes args passed to new_object_graph().

Pinject will prefer an explicit to an implicit binding.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class Foo(object):
...     pass
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='foo-instance')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
foo-instance
>>>

If you have two classes named the same thing, Pinject will have two different (and thus conflicting) implicit bindings. But Pinject will not complain unless you try to use those bindings. Pinject will complain if you try to create different (and thus conflicting) explicit bindings.

Safety

Pinject tries to strike a balance between being helpful and being safe. Sometimes, you may want or need to change this balance.

new_object_graph() uses implicit bindings by default. If you worry that you may accidentally inject a class or use a provider function unintentionally, then you can make new_object_graph() ignore implicit bindings, by setting only_use_explicit_bindings=True. If you do so, then Pinject will only use explicit bindings.

If you want to promote an implicit binding to be an explicit binding, you can annotate the corresponding class with @inject(). The @inject() decorator lets you create explicit bindings without needing to create binding specs, as long as you can live with the binding defaults (e.g., no annotations on args, see below).

>>> class ExplicitlyBoundClass(object):
...     @pinject.inject()
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class ImplicitlyBoundClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='explicit-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()],
...     only_use_explicit_bindings=True)
>>> # obj_graph.provide(ImplicitlyBoundClass)  # would raise a NonExplicitlyBoundClassError
>>> some_class = obj_graph.provide(ExplicitlyBoundClass)
>>> print(some_class.foo)
explicit-foo
>>>

You can also promote an implicit binding to explicit by using @annotated_arg() (see below), with or without @inject() as well.

(Previous versions of Pinject included an @injectable decorator. That is deprecated in favor of @inject(). Note that @inject() needs parens, whereas @injectable didn't.)

On the opposite side of permissiveness, Pinject by default will complain if a provider method returns None. If you really want to turn off this default behavior, you can pass allow_injecting_none=True to new_object_graph().

Annotations

Pinject annotations let you have different objects injected for the same arg name. For instance, you may have two classes in different parts of your codebase named the same thing, and you want to use the same arg name in different parts of your codebase.

On the arg side, an annotation tells Pinject to inject only using a binding whose binding key includes the annotation object. You can use @annotate_arg() on an initializer, or on a provider method, to specify the annotation object.

On the binding side, an annotation changes the binding so that the key of the binding includes the annotation object. When using bind() in a binding spec's configure() method, you can pass an annotated_with arg to specify the annotation object.

>>> class SomeClass(object):
...     @pinject.annotate_arg('foo', 'annot')
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', annotated_with='annot', to_instance='foo-with-annot')
...         bind('foo', annotated_with=12345, to_instance='12345-foo')
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
foo-with-annot
>>>

Also on the binding side, when defining a provider method, you can use the @provides() decorator. The decorator lets you pass an annotated_with arg to specify the annotation object. The decorator's first param, arg_name, also lets you override which arg name the provider is for. This is optional but useful if you want the same binding spec to have two provider methods for the same arg name but annotated differently. (Otherwise, the methods would need to be named the same, since they're for the same arg name.)

>>> class SomeClass(object):
...     @pinject.annotate_arg('foo', 'annot')
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides('foo', annotated_with='annot')
...     def provide_annot_foo(self):
...         return 'foo-with-annot'
...     @pinject.provides('foo', annotated_with=12345)
...     def provide_12345_foo(self):
...         return '12345-foo'
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
foo-with-annot
>>>

When requiring a binding, via the require arg passed into the configure() method of a binding spec, you can pass the arg annotated_with to require an annotated binding.

>>> class MainBindingSpec(pinject.BindingSpec):
...     def configure(self, require):
...         require('foo', annotated_with='annot')
...
>>> class NonSatisfyingBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_instance='an-unannotated-foo')
...
>>> class SatisfyingBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', annotated_with='annot', to_instance='an-annotated-foo')
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[MainBindingSpec(), SatisfyingBindingSpec()])  # works
>>> # obj_graph = pinject.new_object_graph(
... #     binding_specs=[MainBindingSpec(),
... #                    NonSatisfyingBindingSpec()])  # would raise a MissingRequiredBindingError
>>>

You can use any kind of object as an annotation object as long as it implements __eq__() and __hash__().

Scopes

By default, Pinject remembers the object it injected into a (possibly annotated) arg, so that it can inject the same object into other args with the same name. This means that, for each arg name, a single instance of the bound-to class, or a single instance returned by a provider method, is created by default.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_foo(self):
...         return object()
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> print(some_class_1.foo is some_class_2.foo)
True
>>>

In some cases, you may want to create new instances, always or sometimes, instead of reusing them each time they're injected. If so, you want to use scopes.

A scope controls memoization (i.e., caching). A scope can choose to cache never, sometimes, or always.

Pinject has two built-in scopes. Singleton scope (SINGLETON) is the default and always caches. Prototype scope (PROTOTYPE) is the other built-in option and does no caching whatsoever.

Every binding is associated with a scope. You can specify a scope for a binding by decorating a provider method with @in_scope(), or by passing an in_scope arg to bind() in a binding spec's configure() method.

>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope=pinject.PROTOTYPE)
...     def provide_foo(self):
...         return object()
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> print(some_class_1.foo is some_class_2.foo)
False
>>>

If a binding specifies no scope explicitly, then it is in singleton scope. Implicit class bindings are always in singleton scope.

Memoization of class bindings works at the class level, not at the binding key level. This means that, if you bind two arg names (or the same arg name with two different annotations) to the same class, and the class is in a memoizing scope, then the same class instance will be provided when you inject the different arg names.

>>> class InjectedClass(object):
...     pass
...
>>> class SomeObject(object):
...     def __init__(self, foo, bar):
...         self.foo = foo
...         self.bar = bar
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_class=InjectedClass)
...         bind('bar', to_class=InjectedClass)
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()])
>>> some_object = obj_graph.provide(SomeObject)
>>> print(some_object.foo is some_object.bar)
True
>>>

Pinject memoizes class bindings this way because this is more likely to be what you mean if you bind two different arg names to the same class in singleton scope: you want only one instance of the class, even though it may be injected in multiple places.

Provider bindings

Sometimes, you need to inject not just a single instance of some class, but rather the ability to create instances on demand. (Clearly, this is most useful when the binding you're using is not in singleton scope; otherwise you'd always get the same instance, and you may as well just inject that.)

You could inject the Pinject object graph, but you'd have to do that dependency injection manually (Pinject doesn't inject itself!), and you'd be injecting a huge set of capabilities when your class really only needs to instantiate objects of one type.

To solve this, Pinject creates provider bindings for each bound arg name. It will look at the arg name for the prefix provide_, and if it finds that prefix, it assumes you want to inject a provider function for whatever the rest of the arg name is. For instance, if you have an arg named provide_foo_bar, then Pinject will inject a zero-arg function that, when called, provides whatever the arg name foo_bar is bound to.
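The provide_ prefix rule can be sketched in plain Python. This illustrates the idea, not Pinject's own resolution code:

```python
def resolve(arg_name, bindings):
    # Sketch: an arg named provide_<name> gets a zero-arg function that
    # wraps <name>'s binding; any other arg gets the bound value directly.
    prefix = 'provide_'
    if arg_name.startswith(prefix):
        bound_name = arg_name[len(prefix):]
        return lambda: bindings[bound_name]()
    return bindings[arg_name]()

# A PROTOTYPE-like binding: a fresh object on every call.
bindings = {'foo': lambda: object()}
provider = resolve('provide_foo', bindings)
print(provider() is provider())  # False: a fresh instance per call
```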

>>> class Foo(object):
...   def __init__(self):
...     self.forty_two = 42
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def configure(self, bind):
...         bind('foo', to_class=Foo, in_scope=pinject.PROTOTYPE)
...
>>> class NeedsProvider(object):
...     def __init__(self, provide_foo):
...         self.provide_foo = provide_foo
...
>>> obj_graph = pinject.new_object_graph(binding_specs=[SomeBindingSpec()])
>>> needs_provider = obj_graph.provide(NeedsProvider)
>>> print(needs_provider.provide_foo() is needs_provider.provide_foo())
False
>>> print(needs_provider.provide_foo().forty_two)
42
>>>

Pinject will always look for the provide_ prefix as a signal to inject a provider function, anywhere it injects dependencies (initializer args, binding spec provider methods, etc.). This does mean that it's quite difficult, say, to inject an instance of a class named ProvideFooBar into an arg named provide_foo_bar, but assuming you're naming your classes as noun phrases instead of verb phrases, this shouldn't be a problem.

Watch out: don't confuse

  • provider bindings, which let you inject args named provide_something with provider functions; and
  • provider methods, which are methods of binding specs that provide instances of some arg name.

Partial injection

Provider bindings are useful when you want to create instances of a class on demand. But a zero arg provider function will always return an instance configured the same way (within a given scope). Sometimes, you want the ability to parameterize the provided instances, e.g., based on run-time user configuration. You want the ability to create instances where part of the initialization data is provided per-instance at run-time and part of the initialization data is injected as dependencies.

To do this, other dependency injection libraries have you define factory classes. You inject dependencies into the factory class's initializer function, and then you call the factory class's creation method with the per-instance data.

>>> class WidgetFactory(object):
...     def __init__(self, widget_polisher):
...         self._widget_polisher = widget_polisher
...     def new(self, color):  # normally would contain some non-trivial code...
...         return some_function_of(self._widget_polisher, color)
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_something_with_colored_widgets(self, colors, widget_factory):
...         return SomethingWithColoredWidgets(
...             [widget_factory.new(color) for color in colors])
...
>>>

You can follow this pattern in Pinject, but it involves boring boilerplate for the factory class, saving away the initializer-injected dependencies to be used in the creation method. Plus, you have to create yet another ...Factory class, which makes you feel like you're programming in java, not python.

As a less repetitive alternative, Pinject lets you use partial injection on the provider functions returned by provider bindings. You use the @inject() decorator to tell Pinject ahead of time which args you expect to pass directly (vs. automatic injection), and then you pass those args directly when calling the provider function.

>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.inject(['widget_polisher'])
...     def provide_widget(self, color, widget_polisher):
...         return some_function_of(widget_polisher, color)
...     def provide_something_needing_widgets(self, colors, provide_widget):
...         return SomethingNeedingWidgets(
...             [provide_widget(color) for color in colors])
...
>>>

The first arg to @inject(), arg_names, specifies which args of the decorated method should be injected as dependencies. If specified, it must be a non-empty sequence of names of the decorated method's args. The remaining decorated method args will be passed directly.

In the example above, note that, although there is a method called provide_widget() and an arg of provide_something_needing_widgets() called provide_widget, these are not exactly the same! The latter is a dependency-injected wrapper around the former. The wrapper ensures that the color arg is passed directly and then injects the widget_polisher dependency.
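Conceptually, the wrapper behaves like functools.partial with the injected deps pre-bound. The following sketch uses a stand-in provider function; it is not Pinject's implementation:

```python
import functools

def provide_widget(color, widget_polisher):
    # Stand-in for a provider method; returns its inputs for illustration.
    return (widget_polisher, color)

# The framework injects widget_polisher, leaving color to be passed directly:
injected = {'widget_polisher': 'a-polisher'}
provide_widget_wrapper = functools.partial(provide_widget, **injected)

print(provide_widget_wrapper('red'))  # ('a-polisher', 'red')
```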

The @inject() decorator works to specify args passed directly both for provider bindings to provider methods (as in the example above) and for provider bindings to classes (where you can pass args directly to the initializer, as in the example below).

>>> class Widget(object):
...     @pinject.inject(['widget_polisher'])
...     def __init__(self, color, widget_polisher):
...         pass  # normally something involving color and widget_polisher
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def provide_something_needing_widgets(self, colors, provide_widget):
...         return SomethingNeedingWidgets(
...             [provide_widget(color) for color in colors])
...
>>>

The @inject() decorator also takes an all_except arg. You can use this, instead of the (first positional) arg_names arg, if it's clearer and more concise to say which args are not injected (i.e., which args are passed directly).

>>> class Widget(object):
...     # equivalent to @pinject.inject(['widget_polisher']):
...     @pinject.inject(all_except=['color'])
...     def __init__(self, color, widget_polisher):
...         pass  # normally something involving color and widget_polisher
...
>>>

If both arg_names and all_except are omitted, then all args are injected by Pinject, and none are passed directly. (Both arg_names and all_except may not be specified at the same time.) Wildcard positional and keyword args (i.e., *pargs and **kwargs) are always passed directly, not injected.

If you use @inject() to mark at least one arg of a provider method (or initializer) as passed directly, then you may no longer directly inject that provider method's corresponding arg name. You must instead use a provider binding to inject a provider function, and then pass the required direct arg(s), as in the examples above.
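The mechanics can be pictured with functools.partial: the injected provider function is, conceptually, a partial application that fixes the injected dependencies and leaves the direct args for the caller. This is only an illustration of the idea (the names make_widget and 'polisher' are hypothetical), not Pinject's actual implementation:

```python
import functools

def make_widget(color, widget_polisher):
    # Hypothetical stand-in for the provide_widget() provider method.
    return (color, widget_polisher)

# Pinject supplies widget_polisher behind the scenes; what gets injected
# as `provide_widget` behaves like a partial application that only still
# needs the direct arg(s).
provide_widget = functools.partial(make_widget, widget_polisher='polisher')

widgets = [provide_widget(color) for color in ['red', 'blue']]
print(widgets)  # [('red', 'polisher'), ('blue', 'polisher')]
```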

Custom scopes

If you want to, you can create your own custom scope. A custom scope is useful when you have some objects that need to be reused (i.e., cached) but whose lifetime is shorter than the entire lifetime of your program.

A custom scope is any class that implements the Scope interface.

class Scope(object):
    def provide(self, binding_key, default_provider_fn):
        raise NotImplementedError()

The binding_key passed to provide() will be an object implementing __eq__() and __hash__() but otherwise opaque (you shouldn't need to introspect it). You can think of the binding key roughly as encapsulating the arg name and annotation (if any). The default_provider_fn passed to provide() is a zero-arg function that, when called, provides an instance of whatever should be provided.

The job of a scope's provide() function is to return a cached object if available and appropriate, otherwise to return (and possibly cache) the result of calling the default provider function.

Scopes almost always have other methods that control clearing the scope's cache. For instance, a scope may have "enter scope" and "exit scope" methods, or a single direct "clear cache" method. When passing a custom scope to Pinject, your code should keep a handle to the custom scope and use that handle to clear the scope's cache at the appropriate time.
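For example, an "enter scope"/"exit scope" style scope might look like the following sketch (plain Python here for illustration; a real version would implement Pinject's Scope interface, and the class and method names are illustrative):

```python
class EnteredScope(object):
    """Sketch of an "enter scope"/"exit scope" style cache."""
    def __init__(self):
        self._cache = None  # None means "not currently in scope"

    def enter(self):
        self._cache = {}

    def exit(self):
        self._cache = None

    def provide(self, binding_key, default_provider_fn):
        if self._cache is None:
            raise RuntimeError('provide() called outside the scope')
        if binding_key not in self._cache:
            self._cache[binding_key] = default_provider_fn()
        return self._cache[binding_key]

scope = EnteredScope()
scope.enter()
first = scope.provide('foo', object)   # caches a fresh object
second = scope.provide('foo', object)  # returns the cached one
scope.exit()
scope.enter()                          # new scope lifetime, empty cache
third = scope.provide('foo', object)
assert first is second and first is not third
```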

You can use one or more custom scopes by passing a map from scope identifier to scope as the id_to_scope arg of new_object_graph().

>>> class MyScope(pinject.Scope):
...     def __init__(self):
...         self._cache = {}
...     def provide(self, binding_key, default_provider_fn):
...         if binding_key not in self._cache:
...             self._cache[binding_key] = default_provider_fn()
...         return self._cache[binding_key]
...     def clear(self):
...         self._cache = {}
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope='my custom scope')
...     def provide_foo(self):
...         return object()
...
>>> my_scope = MyScope()
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     id_to_scope={'my custom scope': my_scope})
>>> some_class_1 = obj_graph.provide(SomeClass)
>>> some_class_2 = obj_graph.provide(SomeClass)
>>> my_scope.clear()
>>> some_class_3 = obj_graph.provide(SomeClass)
>>> print(some_class_1.foo is some_class_2.foo)
True
>>> print(some_class_2.foo is some_class_3.foo)
False
>>>

A scope identifier can be any object implementing __eq__() and __hash__().

If you plan to use Pinject in a multi-threaded environment (and even if you don't plan to now but may some day), you should make your custom scope thread-safe. The example custom scope above could be trivially (but more verbosely) rewritten to be thread-safe, as in the example below. The lock is reentrant so that something in MyScope can be injected into something else in MyScope.

>>> import threading
>>> class MyScope(pinject.Scope):
...     def __init__(self):
...         self._cache = {}
...         self._rlock = threading.RLock()
...     def provide(self, binding_key, default_provider_fn):
...         with self._rlock:
...             if binding_key not in self._cache:
...                 self._cache[binding_key] = default_provider_fn()
...             return self._cache[binding_key]
...     def clear(self):
...         with self._rlock:
...             self._cache = {}
...
>>>

Scope accessibility

To prevent yourself from injecting objects where they don't belong, you may want to validate, with respect to scope, whether one object may be injected into another.

For instance, you may have created a custom scope for HTTP requests handled by your program. Objects in request scope would be cached for the duration of a single HTTP request. You may want to verify that objects in request scope never get injected into objects in singleton scope. Such an injection is likely not to make semantic sense, since it would make something tied to one HTTP request be used for the duration of your program.

Pinject lets you pass a validation function as the is_scope_usable_from_scope arg to new_object_graph(). This function takes two scope identifiers and returns True iff an object in the first scope can be injected into an object of the second scope.

>>> class RequestScope(pinject.Scope):
...     def start_request(self):
...         self._cache = {}
...     def provide(self, binding_key, default_provider_fn):
...         if binding_key not in self._cache:
...             self._cache[binding_key] = default_provider_fn()
...         return self._cache[binding_key]
...
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     @pinject.provides(in_scope=pinject.SINGLETON)
...     def provide_foo(self, bar):
...         return 'foo-' + bar
...     @pinject.provides(in_scope='request scope')
...     def provide_bar(self):
...         return '-bar'
...
>>> def is_usable(scope_id_inner, scope_id_outer):
...     return not (scope_id_inner == 'request scope' and
...                 scope_id_outer == pinject.SINGLETON)
...
>>> my_request_scope = RequestScope()
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     id_to_scope={'request scope': my_request_scope},
...     is_scope_usable_from_scope=is_usable)
>>> my_request_scope.start_request()
>>> # obj_graph.provide(SomeClass)  # would raise a BadDependencyScopeError
>>>

The default scope accessibility validator allows objects from any scope to be injected into objects from any other scope.
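That default is equivalent to passing a validator that always returns True (a sketch; the function name here is illustrative, not part of Pinject's API):

```python
def default_is_usable(scope_id_inner, scope_id_outer):
    # No restriction: objects from any scope may be injected into
    # objects from any other scope.
    return True
```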

Changing naming conventions

If your code follows PEP8 naming conventions, then you're likely happy with the default implicit bindings (where the class FooBar gets bound to the arg name foo_bar) and where provide_foo_bar() is a binding spec's provider method for the arg name foo_bar.

But if not, read on!

Customizing implicit bindings

new_object_graph() takes a get_arg_names_from_class_name arg. This is the function that is used to determine implicit class bindings. This function takes in a class name (e.g., FooBar) and returns the arg names to which that class should be implicitly bound (e.g., ['foo_bar']). Its default behavior is described in the "implicit class bindings" section above, but that default behavior can be overridden.

For instance, suppose that your code uses a library that names many classes with the leading letter X (e.g., XFooBar), and you'd like to be able to bind that to a corresponding arg name without the leading X (e.g., foo_bar).

>>> import re
>>> def custom_get_arg_names(class_name):
...     stripped_class_name = re.sub('^_?X?', '', class_name)
...     return [re.sub('(?!^)([A-Z]+)', r'_\1', stripped_class_name).lower()]
...
>>> print(custom_get_arg_names('XFooBar'))
['foo_bar']
>>> print(custom_get_arg_names('XLibraryClass'))
['library_class']
>>> class OuterClass(object):
...     def __init__(self, library_class):
...         self.library_class = library_class
...
>>> class XLibraryClass(object):
...     def __init__(self):
...         self.forty_two = 42
...
>>> obj_graph = pinject.new_object_graph(
...     get_arg_names_from_class_name=custom_get_arg_names)
>>> outer_class = obj_graph.provide(OuterClass)
>>> print(outer_class.library_class.forty_two)
42
>>>

The function passed as the get_arg_names_from_class_name arg to new_object_graph() can return as many or as few arg names as it wants. If it always returns the empty list (i.e., if it is lambda _: []), then that disables implicit class bindings.
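For instance, the function could bind each class to more than one arg name, such as both its snake_case name and its all-lowercase name. The sketch below uses only the standard library; the two-alias convention is just an illustration, not something Pinject prescribes:

```python
import re

def get_both_aliases(class_name):
    # Bind each class to its snake_case name and to its all-lowercase
    # name, e.g. FooBar -> ['foo_bar', 'foobar'].
    snake = re.sub('(?!^)([A-Z]+)', r'_\1', class_name).lower()
    return [snake, snake.replace('_', '')]

print(get_both_aliases('FooBar'))  # ['foo_bar', 'foobar']
```

Passing lambda _: [] instead, as the text notes, disables implicit class bindings entirely.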

Customizing binding spec method names

The standard binding spec methods to configure bindings and declare dependencies are named configure and dependencies, by default. If you need to, you can change their names by passing configure_method_name and/or dependencies_method_name as args to new_object_graph().

>>> class NonStandardBindingSpec(pinject.BindingSpec):
...     def Configure(self, bind):
...         bind('forty_two', to_instance=42)
...
>>> class SomeClass(object):
...     def __init__(self, forty_two):
...         self.forty_two = forty_two
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[NonStandardBindingSpec()],
...     configure_method_name='Configure')
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.forty_two)
42
>>>

Customizing provider method names

new_object_graph() takes a get_arg_names_from_provider_fn_name arg. This is the function that is used to identify provider methods on binding specs. This function takes in the name of a potential provider method (e.g., provide_foo_bar) and returns the arg names for which the provider method is a provider, if any (e.g., ['foo_bar']). Its default behavior is described in the "provider methods" section above, but that default behavior can be overridden.

For instance, suppose that you work for a certain large corporation whose python style guide makes you name functions in CamelCase, and so you need to name the provider method for the arg name foo_bar more like ProvideFooBar than provide_foo_bar.

>>> import re
>>> def CustomGetArgNames(provider_fn_name):
...     if provider_fn_name.startswith('Provide'):
...         provided_camelcase = provider_fn_name[len('Provide'):]
...         return [re.sub('(?!^)([A-Z]+)', r'_\1', provided_camelcase).lower()]
...     else:
...         return []
...
>>> print(CustomGetArgNames('ProvideFooBar'))
['foo_bar']
>>> print(CustomGetArgNames('ProvideFoo'))
['foo']
>>> class SomeClass(object):
...     def __init__(self, foo):
...         self.foo = foo
...
>>> class SomeBindingSpec(pinject.BindingSpec):
...     def ProvideFoo(self):
...         return 'some-foo'
...
>>> obj_graph = pinject.new_object_graph(
...     binding_specs=[SomeBindingSpec()],
...     get_arg_names_from_provider_fn_name=CustomGetArgNames)
>>> some_class = obj_graph.provide(SomeClass)
>>> print(some_class.foo)
some-foo
>>>

The function passed as the get_arg_names_from_provider_fn_name arg to new_object_graph() can return as many or as few arg names as it wants. If it returns an empty list, then that potential provider method is assumed not actually to be a provider method.
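As with class names, the function is free to recognize several conventions at once. The sketch below accepts either a provide_ or a new_ prefix; the new_ convention is hypothetical, chosen only to illustrate the idea:

```python
def get_provided_arg_names(provider_fn_name):
    # Accept either prefix; any other name is not a provider method.
    for prefix in ('provide_', 'new_'):
        if provider_fn_name.startswith(prefix):
            return [provider_fn_name[len(prefix):]]
    return []  # empty list: not treated as a provider method

print(get_provided_arg_names('provide_foo'))  # ['foo']
print(get_provided_arg_names('new_bar'))      # ['bar']
print(get_provided_arg_names('helper'))       # []
```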

Miscellaneous

Pinject raises helpful exceptions whose messages include the file and line number of the error. By default, Pinject also shortens the stack traces of the exceptions it raises, so that you don't see the many levels of function calls internal to the Pinject library.

In some situations, though, the complete stack trace is helpful, e.g., when debugging Pinject, or when your code calls Pinject, which calls back into your code, which calls back into Pinject. In such cases, to disable exception stack shortening, you can pass use_short_stack_traces=False to new_object_graph().

Gotchas

Pinject has a few things to watch out for.

Thread safety

Pinject's default scope is SINGLETON. If you have a multi-threaded program, it's likely that some or all of the things that Pinject provides from singleton scope will be used in multiple threads. So, it's important that you ensure that such classes are thread-safe.

Similarly, it's important that your custom scope classes are thread-safe. Even if the objects they provide are only used in a single thread, it may be that the object graph (and therefore the scope itself) will be used simultaneously in multiple threads.

Remember to make locks re-entrant on your custom scope classes, or otherwise deal with one object in your custom scope trying to inject another object in your custom scope.
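The reason re-entrancy matters is that providing one scoped object can trigger providing another in the same scope, re-entering provide() while the lock is already held by the same thread. threading.RLock permits that; a plain threading.Lock would deadlock. A minimal demonstration (the provider names here are illustrative):

```python
import threading

rlock = threading.RLock()

def provide_outer():
    with rlock:          # first acquisition
        return provide_inner()

def provide_inner():
    with rlock:          # re-acquired by the same thread: fine with RLock,
        return 'inner'   # but a plain threading.Lock would deadlock here

result = provide_outer()
print(result)  # inner
```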

That's it for gotchas, for now.

Condensed summary

If you are already familiar with dependency injection libraries such as Guice, this section gives you a condensed, high-level summary of Pinject and how it might be similar to or different from other dependency injection libraries. (If you don't understand it, no problem. The rest of the documentation covers everything listed here.)

  • Pinject uses code and decorators to configure injection, not a separate config file.
  • Bindings are keyed by arg name (not class type, since Python is dynamically typed).
  • Pinject automatically creates bindings to some_class arg names for SomeClass classes.
  • You can ask Pinject only to create bindings from binding specs and classes whose __init__() is marked with @inject().
  • A binding spec is a class that creates explicit bindings.
  • A binding spec can bind arg names to classes or to instances.
  • A binding spec can bind arg names foo to provider methods provide_foo().
  • Binding specs can depend on (i.e., include) other binding specs.
  • You can annotate args and bindings to distinguish among args/bindings for the same arg name.
  • Pinject has two built-in scopes: "singleton" (always memoized; the default) and "prototype" (never memoized).
  • You can define custom scopes, and you can configure which scopes are accessible from which other scopes.
  • Pinject doesn't allow injecting None by default, but you can turn off that check.

Changelog

v0.15: master

  • Enable GitHub Actions
  • CI/CD DevOps for publishing to PyPI automatically
  • A version whose minor number is odd will be published as a prerelease, with dev added to the patch version (e.g., 0.15.0 will be published as 0.15.dev0 because the minor number 15 is odd)
  • Remove Python version 3.3 & 3.4 from CI/CD #50

v0.12: 28 Nov, 2018

  • Support Python 3
  • Add two maintainers: @trein and @huan

v0.10.2:

  • Fixed bug: allows binding specs containing only provider methods.

v0.10.1:

  • Fixed bug: allows omitting custom named configure() binding spec method.

v0.10:

  • Added default __eq__() to BindingSpec, so that DAG binding spec dependencies can have equal but not identical dependencies.
  • Allowed customizing configure() and dependencies() binding spec method names.
  • Deprecated @injectable in favor of @inject.
  • Added partial injection.
  • Added require arg to allow binding spec configure methods to declare but not define bindings.
  • Sped up tests (and probably general functionality) by 10x.
  • Documented more design decisions.
  • Added @copy_args_to_internal_fields and @copy_args_to_public_fields.
  • Renamed InjectableDecoratorAppliedToNonInitError to DecoratorAppliedToNonInitError.

v0.9:

  • Added validation of python types of public args.
  • Improved error messages for all Pinject-raised exceptions.
  • Added use_short_stack_traces arg to new_object_graph().
  • Allowed multiple @provides on single provider method.

v0.8:

  • First released version.

Author

Kurt Steinkraus @kurt

Maintainers

License

Apache-2.0

Pinject and Google

Though Google owns this project's copyright, this project is not an official Google product.

Comments
  • Unable to import pinject

    Hello, I decided to give pinject a try, but unfortunately it does not import. Here's what I do: install pinject with pip install pinject and then interactively import pinject.

It seems to fail on line 156 of third_party/decorator.py. I'm on python 3.4, so maybe that's the reason, although looking at the code I can see blocks dealing with python >3.2.

    Is that a bug?

    opened by dmitrybelyakov 12
  • setup.py loads version information from VERSION

Previously, setup.py would import the version information from pinject.version. This would trigger pinject's __init__.py to import and load, which would take a roundabout path to importing the decorator dependency. The setup.py script is launched by pip while inspecting the package and dependencies, so this problem will come up in a package sourcing pinject as a dependency even if decorator is declared as a dependency in that package.

    Referencing: https://packaging.python.org/guides/single-sourcing-package-version/#single-sourcing-the-version

    The previous method was similar to number 6 (setting the value of version and then importing it). It was changed to number 4 (placing the version in a VERSION file that is read from disk).

    Given there is a parent VERSION file that is connected to the Makefile, this file is now sourced by setup.py. To ensure it is available when installing pinject, the VERSION file was also added to the manifest. The existing version.py convention is kept for regular use by pinject since that file is readily available to the package when running its source normally.

    This should fix issue #35.

    opened by rockobonaparte 10
  • Handle find_classes_in_module when module.__bases__ is not a tuple

    Issue with import of tensorboard library (and perceivably some others too)

For context, the issue arose when I attempted to use tensorboard (a dependency of tensorflow) for a machine learning project.

Importing tensorflow in any of the files that are imported to build the object graph results in the following error:

    Traceback (most recent call last):
      File "main.py", line 9, in <module>
        OBJ_GRAPH = pinject.new_object_graph()
      File "/Users/CCCP/.config/python_vi/tf/lib/python3.7/site-packages/pinject/object_graph.py", line 100, in new_object_graph
        found_classes = finding.find_classes(modules, classes)
      File "/Users/CCCP/.config/python_vi/tf/lib/python3.7/site-packages/pinject/finding.py", line 32, in find_classes
        all_classes |= _find_classes_in_module(module)
      File "/Users/CCCP/.config/python_vi/tf/lib/python3.7/site-packages/pinject/finding.py", line 46, in _find_classes_in_module
        for member_name, member in inspect.getmembers(module):
      File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/inspect.py", line 364, in getmembers
        for base in object.__bases__:
    TypeError: 'int' object is not iterable
    

    Recreation

    A bare-bone recreation of the bug can be achieved with

    import pinject
    import tensorflow as tf
    
    class ExampleClass:
        pass
    
    OBJ_GRAPH = pinject.new_object_graph()
    example_class = OBJ_GRAPH.provide(ExampleClass)
    

    and the following requirements file:

    requirements.txt
    absl-py==0.9.0
    astunparse==1.6.3
    attrs==19.3.0
    cachetools==4.1.0
    certifi==2020.6.20
    chardet==3.0.4
    decorator==4.4.2
    gast==0.3.3
    google-auth==1.18.0
    google-auth-oauthlib==0.4.1
    google-pasta==0.2.0
    grpcio==1.30.0
    h5py==2.10.0
    idna==2.9
    importlib-metadata==1.7.0
    Keras-Preprocessing==1.1.2
    Markdown==3.2.2
    mock==4.0.2
    more-itertools==8.4.0
    numpy==1.19.0
    oauthlib==3.1.0
    opt-einsum==3.2.1
    packaging==20.4
    pinject==0.14.1
    pluggy==0.13.1
    protobuf==3.12.2
    py==1.9.0
    pyasn1==0.4.8
    pyasn1-modules==0.2.8
    pyparsing==2.4.7
    pytest==5.4.3
    requests==2.24.0
    requests-oauthlib==1.3.0
    rsa==4.6
    scipy==1.4.1
    semver==2.10.2
    six==1.15.0
    tensorboard==2.2.2
    tensorboard-plugin-wit==1.6.0.post3
    tensorflow==2.2.0
    tensorflow-estimator==2.2.0
    termcolor==1.1.0
    urllib3==1.25.9
    wcwidth==0.2.5
    Werkzeug==1.0.1
    wrapt==1.12.1
    zipp==3.1.0
    

    Issue

    The error arises inside the inspect.getmembers() function called in finding.py:

    • It attempts to iterate through the module.__bases__ attribute of the module passed into it from finding.py
    • However, running the following import results in the module.__bases__ attribute being an integer, not the expected tuple:
    # tensorboard 2.2.2
    from tensorboard.compat.tensorflow_stub import pywrap_tensorflow
    assert pywrap_tensorflow.__bases__ == 0
    

    This is somewhat peculiar, as an attempt to override __bases__ of a class:

    class ExampleClass(object):
               pass
    ExampleClass.__bases__ = 0
    

throws TypeError: can only assign tuple to ExampleClass.__bases__, not int

    def _find_classes_in_module(module):
        classes = set()
        for member_name, member in inspect.getmembers(module):
            try:
                if inspect.isclass(member) and not member_name == "__class__":
                    classes.add(member)
            except NameError:
                # In Python 3 calling isinstance() on SWIG's global cvar property
                # raises:
                #     "NameError: Unknown C global variable"
                # In that case just continue, otherwise let the Error through.
                if not member_name == "cvar":
                    raise
        return classes
    

    Patch

    Since such a __bases__ issue can foreseeably occur in other libraries, my proposition is for a simple patch that would pre-empt module.__bases__ being of an incorrect format, and set an empty tuple so as to bypass this problem, but allow the rest of inspect.getmembers() to execute as expected.

    def _find_classes_in_module(module):
        classes = set()
    
        try:
            if not isinstance(module.__bases__, tuple):
                module.__bases__ = ()
        except AttributeError:
            pass
    
        for member_name, member in inspect.getmembers(module):
            try:
                if inspect.isclass(member) and not member_name == '__class__':
                    classes.add(member)
            except NameError:
                # In Python 3 calling isinstance() on SWIG's global cvar property
                # raises:
                #     "NameError: Unknown C global variable"
                # In that case just continue, otherwise let the Error through.
                if not member_name == 'cvar':
                    raise
        return classes
    
    
    opened by creamy-seas 9
  • Getting FileNotFoundError: [Errno 2] No such file or directory: '<my-path>/site-packages/pinject/../VERSION'

    I am getting this error when installing the latest version (0.12.2) with pip install pinject:

    File "/home/ronen/reach/reach_venv3.6/lib/python3.6/site-packages/pinject/version.py", line 10, in <module>
        VERSION = open(VERSION_FILE).read().strip()
    FileNotFoundError: [Errno 2] No such file or directory: '/home/ronen/reach/reach_venv3.6/lib/python3.6/site-packages/pinject/../VERSION'

    So I have looked and there isn't really any VERSION file in my site-packages folder (should there be?). I am running python3.6.7 on Ubuntu 18.04. We have been running with django and pinject for a long time now and everything was great.

    Please help, thanks in advance!

    bug 
    opened by rmiller-sgntr 9
  • Allow custom metaclasses by relaxing _verify_type

    Hiya, I made a small change to get Pinject to work with Cloud Endpoints where I wanted to provide my API class which inherits from remote.Service which in turn has _ServiceClass set as metaclass.

So _verify_type(cls, types.TypeType, 'cls') on object_graph.py:234 (where cls is a subclass of remote.Service) will fail, because its type is _ServiceClass and not the required types.TypeType. I've changed the check to verify that the type is a subclass of the required type, rather than requiring an exact match.

This makes it work for me, but honestly I don't yet know if it has any side effects down the road.

    Cheers for a nice lib. :tada:

    opened by mollstam 8
  • Publishing pinject v0.12 with Python 3 support

Hi there (everyone who has been looking for Python 3 support in pinject for a long time)!

    We had just merged #19 (thanks @trein for the great work!) and staging a test version v0.11 on test.pypi.org.

    Please install the dev version of pinject by running the following pip command:

    pip install \
      --no-deps \
      --no-cache \
      --upgrade \
      --index-url https://test.pypi.org/simple/ \
      pinject
    

    Then let us know whether it works under your environment.

We will publish the v0.12 version to pypi.org as an official release after we confirm that v0.11 works as expected, by collecting enough confirmation replies under this issue.

Thanks for your help, and let's look forward to the v0.12 release!

    enhancement 
    opened by huan 7
  • [issue #7] Introduce support to Python 3.x

    • Add the six package, a simple utility for wrapping over differences between Python 2 and Python 3
    • Encapsulate compatibility methods in 'support.py' module
    • Introduce unit tests for 'support.py' module

    Review needed for two failing tests on Python 2.7.x and 3.x:

    • test_unknown (pinject.locations_test.GetClassNameAndLocTest)
    • test_unknown (pinject.locations_test.GetTypeLocTest)
    enhancement 
    opened by trein 7
  • latest release on pypi is 0.14.1 but on github it's 0.15.3

    Is there a reason for skipping Deploy here?

    The last release on pypi is 0.14.1, which is missing some fixes like #46

    Publishing the latest version to pypi.org would be very much appreciated!

    Thanks for the great tool.

    question 
    opened by MrGreenTea 2
  • Catch NameError resulting from calling isinstance() on SWIG global cvar in Python 3

    I couldn't think of a way to add a unit test without requiring a SWIG dependency, and other examples of handling this issue didn't provide any examples (https://github.com/numpy/numpy/commit/51a1895234b25b8bf0ace24b8ee881a91c39601c). WDYT?

    opened by dthkao 2
  • Remove Python v3.3 & v3.4 from CI/CD

Because it seems that both versions are deprecated.

    GitHub Actions link

    ##[error]Version 3.3 with arch x64 not found
    ##[error]Version 3.4 with arch x64 not found
    Available versions:
    
    2.7.17 (x64)
    3.5.9 (x64)
    3.6.10 (x64)
    3.7.6 (x64)
    3.8.2 (x64)
    

    Travis CI link

    Downloading archive: https://storage.googleapis.com/travis-ci-language-archives/python/binaries/ubuntu/16.04/x86_64/python-3.3.tar.bz2
    0.11s$ curl -sSf --retry 5 -o python-3.3.tar.bz2 ${archive_url}
    curl: (22) The requested URL returned error: 404 Not Found
    Unable to download 3.3 archive. The archive may not exist. Please consider a different version.
    
    bug 
    opened by huan 1
  • ObjectGraph Builder

Hi, I've developed a builder for multiple-step configuration: a configuration manager that can be imported in different places, configured, and in the end used to build the object graph.

I've made a separate repo, since I wanted to use it immediately for my own project; however, I'm opening this ticket as a suggestion. If you are interested in integrating it with the original package, I'd be happy to port it, open a PR, and shut down the additional package.

    This is the package, and please check the documentation for use cases. https://github.com/eshta/object-graph-builder

    Please let me know your thoughts, and thanks :)

    opened by 0mars 1
  • Feature request: "Local" bindings without annotating non-binding-spec code

    Problem

    The only way to bind the same name to two different objects depending on where they'll be used seems to be to annotate the "non-binding-spec" code to make the two parameters distinguishable.

    But annotations of the non-binding-spec code go against pinject's unique selling point, which is that you can leave your regular code completely untouched. In fact, the very first point of the "Why pinject?" README section says:

    [...] Forget having to decorate your code with @inject_this and @annotate_that just to get started. With Pinject, you call new_object_graph(), one line, and you're good to go.

Annotations to avoid name collisions bring back exactly the kind of @annotate_that mess that people who like pinject want to avoid.

    So this is a feature request to come up with and implement an alternative to annotations to avoid name collisions and have certain bindings only apply "locally", e.g. only to one specific class.

    Some ideas

    Extra parameter to bind

    The feature could take the form of e.g. an extra parameter to bind that allows you to choose a specific requesting class for which it is applied:

    class SomeBindingSpec:
      def configure(bind):
        bind("common_name", to_class=SomeBoundClass, local_to=SomeRequestingClass)
    
    class SomeOtherBindingSpec:
      def configure(bind):
        bind("common_name", to_class=SomeOtherBoundClass, local_to=SomeOtherRequestingClass)
    

    Decorated binding specs

    Another, perhaps more flexible way to do this would be to instead implement locality on the level of binding specs, e.g.:

    @pinject.local_binding_spec(requesters=[SomeRequestingClass])
    class SomeBindingSpec:
      def configure(bind):
        bind("common_name", to_class=SomeBoundClass)
    
    @pinject.local_binding_spec(requesters=[SomeOtherRequestingClass])
    class SomeOtherBindingSpec:
      def configure(bind):
        bind("common_name", to_class=SomeOtherBoundClass)
    

    Composable binding specs and object graphs

Another idea that would allow one to more easily work around the issue would be if binding specs and object graphs were more "composable" than they are now. You could, e.g., create a common object graph without collisions, plus two separate object graphs each consisting of the common one and the bindings that would otherwise collide, then use those to provide the conflicting requesting classes separately. pinject would, however, have to ensure that both object graphs return the exact same instances for the common parts, just as it does normally within a single object graph.

    Thoughts?

    opened by sh-at-cs 0
  • [Enhancement] Allow calling methods and injecting their parameters

    Problem

A lot of times I just want to call a function which has arguments that need to be injected. In order to call the function like this, I basically have to create a wrapper class for the function which specifies the arguments required, and then call some method on the wrapper to pass in the injected arguments. This works but is a lot of boilerplate. e.g.

    def foobar(foo: Foo) -> int:
        ...
        
    class FoobarWrapper:
        def __init__(self, foo: Foo) -> None:
            self.foo = foo
        def call(self) -> int:
            return foobar(self.foo)
    
    from pinject import new_object_graph

    obj_graph = new_object_graph()
    print("foobar returns:", obj_graph.provide(FoobarWrapper).call())
    

    Solution

    I want to be able to call functions with arbitrary parameters and have pinject construct all of the necessary inputs for me. The library seems to be set up nicely to support this; however, doing it today requires accessing private members of the ObjectGraph instance.

    Proof of Concept

    After browsing the implementation, I was able to achieve my goal with the following three-line hack. I wrap it in a helper function to keep things simple and type-safe.

    from typing import TypeVar, Callable
    from pinject.object_graph import ObjectGraph
    
    T = TypeVar("T")
    
    
    def inject_func(obj_graph: ObjectGraph, func: Callable[..., T]) -> T:
        context = obj_graph._injection_context_factory.new(func)
        args, kwargs = obj_graph._obj_provider.get_injection_pargs_kwargs(func, context, [], {})
        return func(*args, **kwargs)
    

    And here's how you'd call it:

    class Bar:
        def __init__(self) -> None:
            self.a = 1
    
    
    class Foo:
        def __init__(self, bar: Bar) -> None:
            self.bar = bar
    
    
    def foobar(foo: Foo) -> int:
        return foo.bar.a
    
    
    from pinject import new_object_graph

    obj_graph = new_object_graph()
    print("foobar returns:", inject_func(obj_graph, foobar))
    
    foobar returns: 1
    
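For intuition about why the hack is only three lines: the heavy lifting, computing a callable's injectable arguments, already exists inside pinject. Here is a stdlib-only toy of the same invoke-a-function-with-injected-args idea (note: this sketch resolves dependencies by type annotation for brevity, whereas pinject actually resolves by argument name):

```python
import inspect
from typing import Callable, TypeVar

T = TypeVar("T")


def invoke(target: Callable[..., T]) -> T:
    """Recursively construct each annotated parameter, then call the target."""
    kwargs = {}
    for name, param in inspect.signature(target).parameters.items():
        if param.annotation is not inspect.Parameter.empty:
            kwargs[name] = invoke(param.annotation)  # build the dependency
    return target(**kwargs)


class Bar:
    def __init__(self) -> None:
        self.a = 1


class Foo:
    def __init__(self, bar: Bar) -> None:
        self.bar = bar


def foobar(foo: Foo) -> int:
    return foo.bar.a


print(invoke(foobar))  # 1
```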

    Desired Implementation

    What I'd like to see is something like this at the same level as ObjectGraph.provide, maybe named ObjectGraph.invoke:

    obj_graph = new_object_graph()
    print("foobar returns:", obj_graph.invoke(foobar))
    
    opened by cal-pratt 0
  • Undefined name: errors

    Undefined name: errors

    Where is errors defined?

    % flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics

    ./pinject/pinject/__init__.py:32:23: F821 undefined name 'errors'
    for thing_name in dir(errors):
                          ^
    ./pinject/pinject/__init__.py:33:21: F821 undefined name 'errors'
        thing = getattr(errors, thing_name)
                        ^
    2     F821 undefined name 'errors'
    2
    
    opened by cclauss 0
  • Inject dependency into decorators

    Inject dependency into decorators

    Hello,

    How can I use Pinject to inject dependencies into decorators?

    For example, say I have a decorator like:

    # foo.py
    def cache_decorator(cache_client):
        def decorator(func):
            def wrapper(*args, **kwargs):
                cache_client.read()
                result = func(*args, **kwargs)
                cache_client.write(result)
                return result
            return wrapper
        return decorator
    
    class A:
      @cache_decorator(dependency_I_want_to_inject)
      def foo(self, bar):
        bar.baz()
    

    The goal is to be able to swap the cache_client for a mock in unit tests.

    Is something like this possible with Pinject?
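Not an answer from the maintainers, but one common pattern is to have the decorator look the client up on the instance rather than close over it at class-definition time; then whatever cache_client was injected into __init__ (a real client in production, a fake in tests) is what the wrapper uses. A stdlib-only sketch (names like cached and FakeCache are made up for illustration):

```python
from functools import wraps


def cached(method):
    """Decorator that reads the cache client off the instance at call time."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        self.cache_client.read()
        result = method(self, *args, **kwargs)
        self.cache_client.write(result)
        return result
    return wrapper


class FakeCache:
    """Test double recording every write."""
    def __init__(self):
        self.writes = []
    def read(self):
        return None
    def write(self, value):
        self.writes.append(value)


class A:
    def __init__(self, cache_client):
        # Injectable: by pinject in production, by hand in tests.
        self.cache_client = cache_client

    @cached
    def foo(self):
        return 42


a = A(FakeCache())
print(a.foo())                # 42
print(a.cache_client.writes)  # [42]
```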

    opened by tc8 0
  • What's the release strategy?

    What's the release strategy?

    I see there have been prerelease versions since March 2020, while the last stable release was in May 2019. Are there any plans to roll the PRs from the past 18 months into a stable release?

    opened by shawnsparks-work 3
  • pinject.errors.OnlyInstantiableViaProviderFunctionError: simple sample script, but I don't know how to fix the error

    pinject.errors.OnlyInstantiableViaProviderFunctionError: simple sample script, but I don't know how to fix the error

    [[email protected] ~]# python3
    Python 3.6.8 (default, Apr 16 2020, 01:36:27)
    [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
    Type "help", "copyright", "credits" or "license" for more information.

    >>> import pinject
    >>> from enum import Enum

    >>> class SendKind(Enum):
    ...     text = 't'
    ...     link = 'l'
    ...
    >>> class test(object):
    ...     @pinject.copy_args_to_internal_fields
    ...     def __init__(self, send_kind):
    ...         pass
    ...     def demo(self):
    ...         print(self._send_kind.text)
    ...
    >>> obj_graph = pinject.new_object_graph()
    >>> test_class = obj_graph.provide(test)
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/pinject/scoping.py", line 62, in provide
        return self._binding_key_to_instance[binding_key]
    KeyError: <the binding name "send_kind" (unannotated)>

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 51, in provide_from_arg_binding_key
        provided = provider_indirection.StripIndirectionIfNeeded(Provide)
      File "/usr/local/lib/python3.6/site-packages/pinject/provider_indirections.py", line 26, in StripIndirectionIfNeeded
        return provide_fn()
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 43, in Provide
        lambda: binding.proviser_fn(child_injection_context, self,
      File "/usr/local/lib/python3.6/site-packages/pinject/scoping.py", line 64, in provide
        instance = default_provider_fn()
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 44, in <lambda>
        pargs, kwargs))
      File "/usr/local/lib/python3.6/site-packages/pinject/bindings.py", line 264, in Proviser
        to_class, injection_context, pargs, kwargs)
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 70, in provide_class
        return cls(*init_pargs, **init_kwargs)
    TypeError: __call__() missing 1 required positional argument: 'value'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.6/site-packages/pinject/object_graph.py", line 203, in provide
        raise e
      File "/usr/local/lib/python3.6/site-packages/pinject/object_graph.py", line 200, in provide
        direct_init_pargs=[], direct_init_kwargs={})
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 66, in provide_class
        direct_init_pargs, direct_init_kwargs)
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 83, in get_injection_pargs_kwargs
        lambda abk: self.provide_from_arg_binding_key(
      File "/usr/local/lib/python3.6/site-packages/pinject/arg_binding_keys.py", line 108, in create_kwargs
        for arg_binding_key in arg_binding_keys}
      File "/usr/local/lib/python3.6/site-packages/pinject/arg_binding_keys.py", line 108, in <dictcomp>
        for arg_binding_key in arg_binding_keys}
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 84, in <lambda>
        fn, abk, injection_context))
      File "/usr/local/lib/python3.6/site-packages/pinject/object_providers.py", line 58, in provide_from_arg_binding_key
        binding.get_binding_target_desc_fn())
    pinject.errors.OnlyInstantiableViaProviderFunctionError: when injecting test.__init__, the arg named "send_kind" (unannotated) cannot be injected, because its provider, the class __main__.SendKind, needs at least one directly passed arg
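The root cause here is not specific to this script: pinject's implicit binding for send_kind is the class SendKind itself, and pinject instantiates implicitly bound classes with no arguments, which an Enum subclass cannot support (an Enum must be called with a value). A minimal stdlib illustration; the provider-method fix in the trailing comment is a sketch, and names like MyBindingSpec are made up:

```python
from enum import Enum


class SendKind(Enum):
    text = 't'
    link = 'l'


# pinject effectively attempts SendKind() with no arguments, which an
# Enum subclass rejects, hence OnlyInstantiableViaProviderFunctionError.
try:
    SendKind()
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

# The usual fix is to bind send_kind explicitly, e.g. via a provider
# method on a binding spec (hypothetical names, real pinject pattern):
#
#     class MyBindingSpec(pinject.BindingSpec):
#         def provide_send_kind(self):
#             return SendKind.text
#
#     obj_graph = pinject.new_object_graph(binding_specs=[MyBindingSpec()])
```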

    opened by o1dcat 0
Releases(v0.15.3)
  • v0.15.3(Apr 10, 2020)

  • v0.12(Nov 27, 2018)

  • v0.10.2(Aug 16, 2013)

  • v0.10.1(Aug 15, 2013)

  • v0.10(Aug 2, 2013)

    Since v0.9:

    • Added a default __eq__() to BindingSpec, so that binding spec dependency DAGs can contain equal-but-not-identical binding specs.
    • Allowed customizing configure() and dependencies() binding spec method names.
    • Deprecated @injectable in favor of @inject.
    • Added partial injection.
    • Added require arg to allow binding spec configure methods to declare but not define bindings.
    • Sped up tests (and probably general functionality) by 10x.
    • Documented more design decisions.
    • Added @copy_args_to_internal_fields and @copy_args_to_public_fields.
    • Renamed InjectableDecoratorAppliedToNonInitError to DecoratorAppliedToNonInitError.
    Source code(tar.gz)
    Source code(zip)
  • v0.9(Jul 8, 2013)

    Last (anticipated) release before v1.0.

    Since v0.8:

    • Added validation of python types of public args.
    • Improved error messages for all Pinject-raised exceptions.
    • Added use_short_stack_traces arg to new_object_graph().
    • Allowed multiple @provides on single provider method.
    Source code(tar.gz)
    Source code(zip)
  • v0.8(Jul 6, 2013)

Owner
Google
Google ❤️ Open Source