
Monday, March 26, 2018

Context managers in Python

A context manager in Python is a way to manage resources. Typical uses of context managers include locking and unlocking resources such as database connections, and opening and closing files.

Context managers are used together with the with statement. You have probably encountered the with statement at some point when coding in Python; the most common use case is file handling
with open('data.dat', 'r') as file: 
   contents = file.read()
However, files do not have to be opened like this. Instead, you could also write
file = open('data.dat', 'r')
contents = file.read()
file.close()
The problem with this code is that the file.close() method might never be called if an exception occurs after the file is opened. If the file is kept open, you have a resource leak, and the system may slow down or even crash, since the number of available file handles is finite.

A more robust way of dealing with the file uses the try... finally statement like this
file = open('data.dat', 'r')
try:
   contents = file.read()
finally:
   file.close()
So even if there is an exception while reading the file, the finally clause ensures that the file is closed. This is essentially how the with statement is implemented.

The with statement has two minor advantages over the try... finally version above: (1) it is a little shorter, and (2) it does not require calling the close() method explicitly, since the with statement calls it automatically whenever the program leaves the indented block below the with statement.

So the main purpose of the with statement is to guarantee the execution of startup and cleanup actions around a block of code, with the main application being resource management.

So how would we write a context manager ourselves? We can implement this functionality using a class
class MyOpen(object):
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        self.file = open(self.filename)
        return self.file

    def __exit__(self, ctx_type, ctx_value, ctx_traceback):
        self.file.close()
        return False  # returning True here would silently suppress exceptions
This class has two methods which are specifically used by the with statement: the __enter__() and __exit__() methods. The __enter__() method is called at the beginning of the with statement. The __exit__() method is called when leaving the with statement; its three arguments describe any exception raised inside the block. Returning False, as above, lets such an exception propagate, while returning True would suppress it.

You might wonder what the difference between __init__() and __enter__() is. The __init__() method is called when the class object is created, which does not need to be the beginning of the with statement. This difference allows us to produce reusable context managers like this
file_handler = MyOpen('data.dat')
with file_handler as file:
    contents = file.read()
In this case, the __init__() method is called in line one, when the object is created, the __enter__() method is called in line two, when the with statement begins, and the __exit__() method is called in line three, when the with block ends.

So to produce a context manager you just have to create a class with two special methods: __enter__() and __exit__(). The __enter__() method returns the resource to be managed, like a file object in the case of open(). The __exit__() method does any cleanup work.
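The standard library's contextlib module offers a shortcut for writing such context managers as generator functions. Here is a sketch equivalent to the MyOpen class above (my_open is just an illustrative name):
from contextlib import contextmanager

@contextmanager
def my_open(filename):
    file = open(filename)
    try:
        yield file  # everything up to the yield plays the role of __enter__()
    finally:
        file.close()  # everything after it plays the role of __exit__()

with my_open('data.dat') as file:
    contents = file.read()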

Hope that was useful and please let me know if you have any comments or questions in the comment section below.
cheers
Florian

Friday, February 16, 2018

Understanding the super() function in Python

In this blog post, I will explain how the super() function in Python works. super() is a built-in Python function and can be used within a class to gain access to methods inherited from a parent class that have been overridden.

So let's look at an example. Assume we want to build a dictionary class which has all the properties of dict, but additionally writes to a logger. This can be done by defining a class which inherits from the dict class and overrides the relevant methods with new ones which do the same as the originals but additionally make the logging call. The new dict class would look like this
import logging

class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info("Setting key %s to %s" % (key, value))
        super().__setitem__(key, value)
    def __getitem__(self, key):
        logging.info("Access key %s" % key)
        return super().__getitem__(key)
Here we override the __getitem__ and __setitem__ methods with new ones, adding the logging functionality while keeping the behavior of the original methods. Note that we do not strictly need super() to do this, since we could get the same result with
class LoggingDict(dict):
    def __setitem__(self, key, value):
        logging.info("Setting key %s to %s" % (key, value))
        dict.__setitem__(self, key, value)
    def __getitem__(self, key):
        logging.info("Access key %s" % key)
        return dict.__getitem__(self, key)
The advantage of super() is that, should you decide that you want to inherit from a different class, you would only need to change the first line of the class definition, while the explicit use of the parent class requires you to go through the entire class and change the parent class name everywhere, which can become quite cumbersome for large classes. So super() makes your code more maintainable.

However, the behavior of super() gets more complicated if you inherit from multiple classes and the method you refer to is present in more than one of these parent classes. Since the parent class is not explicitly named, which parent class is addressed by super()?

The super() function considers an ordering of the inherited classes and goes through that ordered list until it finds the first match. The ordered list is known as the Method Resolution Order (MRO). You can inspect the MRO of any class like this
>>> dict.__mro__
(<class 'dict'>, <class 'object'>)
The use of the MRO in the super() function can lead to very different results in the case of multiple inheritance, compared to the explicit declaration of the parent class. Let's go through another example where we define a Bird class which represents the parent class for the Parrot class and the Hummingbird class:
class Bird(object): 
    def __init__(self): 
        print("Bird init") 

class Parrot(Bird):
    def __init__(self):
        print("Parrot init")
        Bird.__init__(self) 

class Hummingbird(Bird):
    def __init__(self): 
        print("Hummingbird init")
        super(Hummingbird, self).__init__()
Here we used the explicit declaration of the parent class in the Parrot class, while in the Hummingbird class we use super(). From this, I will now construct an example where the Parrot and Hummingbird classes will behave differently because of the super() function.

Let's create a FlyingBird class which handles all properties of flying birds. Non-flying birds like ostriches would not inherit from this class:
class FlyingBird(Bird):
    def __init__(self):
        print("FlyingBird init")
        super(FlyingBird, self).__init__()
Now we produce child classes of Parrot and Hummingbird, which specify specific types of these animals. Remember, Hummingbird uses super, Parrot does not:
class Cockatoo(Parrot, FlyingBird):
    def __init__(self):
        print("Cockatoo init")
        super(Cockatoo, self).__init__()

class BeeHummingbird(Hummingbird, FlyingBird):
    def __init__(self):
        print("BeeHummingbird init")
        super(BeeHummingbird, self).__init__()
If we now instantiate a Cockatoo, we will find that it does not call the __init__ method of the FlyingBird class
>>> Cockatoo() 
Cockatoo init 
Parrot init 
Bird init 
while instantiating a BeeHummingbird does
>>> BeeHummingbird()
BeeHummingbird init 
Hummingbird init 
FlyingBird init 
Bird init 
To understand the order of calls you might want to look at the MRO
>>> print(BeeHummingbird.__mro__)
(<class '__main__.BeeHummingbird'>, <class '__main__.Hummingbird'>,
<class '__main__.FlyingBird'>, <class '__main__.Bird'>, <class 'object'>)
This is an example where not using super() causes a bug in the class initialization, since all our Cockatoo instances miss the initialization done by the FlyingBird class. It clearly demonstrates that the use of super() goes beyond just avoiding explicit declarations of a parent class within another class.
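To fix the bug, Parrot has to cooperate with the MRO as well, i.e. call super() instead of naming Bird explicitly. A sketch of the fix:
class Parrot(Bird):
    def __init__(self):
        print("Parrot init")
        super(Parrot, self).__init__()  # follow the MRO instead of hard-coding Bird
With this change, Cockatoo() prints
Cockatoo init
Parrot init
FlyingBird init
Bird init
since super() now walks the full MRO of Cockatoo.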

Just as a side note before we finish: the syntax of the super() function changed between Python 2 and Python 3. While the Python 2 version requires an explicit declaration of the arguments, as used in the bird examples above, Python 3 fills them in implicitly, which changes the syntax from (Python 2)
super(class, self).method(args)
to
super().method(args)
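With the Python 3 syntax, the Hummingbird class from above would simply read
class Hummingbird(Bird):
    def __init__(self):
        print("Hummingbird init")
        super().__init__()  # Python 3: arguments are filled in implicitly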
I hope that was useful. Let me know if you have any comments/questions. Note that there are very useful discussions of this topic on Stack Overflow and in this blog post.
cheers
Florian

Wednesday, February 7, 2018

Python best coding practices: Seven tips for better code

The Python Software Foundation makes detailed recommendations about naming conventions and styles in the PEP 8 style guide. Here is a short summary of that fairly long document, picking seven points I found useful.

The suggested coding and naming conventions sometimes make the code more robust, but often they just make the code more readable. Keep in mind that readability is usually the most critical aspect of a professional codebase, since any improvement first requires understanding the existing code.

1. A good Python coding practice is to use the floor division operator // wherever you want an integer result. For example, I used to index arrays and lists like this
nums[int(N/2)]
but this can be done much faster with the floor division operator
nums[N//2]
You can test that with
import time
time0 = time.time()
for i in range(0, 10000):
    5//2
print(time.time()-time0)
which in my case gave $0.0006051$, while my old approach
import time
time0 = time.time()
for i in range(0, 10000):
    int(5/2)
print(time.time()-time0)
takes $0.002234$. The reason the floor version is faster is that the constant expression 5//2 is folded into its result at compile time, while int(5/2) still performs a function call at runtime. Read more about it here.
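If you want to see this yourself, the built-in dis module shows the compiled bytecode (the exact output depends on your Python version):
import dis

# 5//2 is folded into the constant 2 at compile time
dis.dis(compile("5//2", "<string>", "eval"))

# 5/2 is folded into the constant 2.5, but the int() call
# still happens at runtime
dis.dis(compile("int(5/2)", "<string>", "eval"))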

2. Next, how should we test whether a variable x is None? The following four versions could be used
# bad
if x:

# better
if x != None:

# better
if not x is None:

# best
if x is not None:
The recommended version is the last one. Case 1 is dangerous: while None evaluates as false in a boolean context, many other values do as well (such as 0, empty strings, and empty lists). Therefore, if it really is None you want to test for, you should write that explicitly. So while avoiding case 1 makes your code more robust, avoiding cases 2 and 3 just follows Python coding conventions.

The difference between the inequality operator (!=) and 'is not' is subtle but important. The != operator tests whether the variable holds a value unequal to None, and since a class can override __eq__, this test can in principle give surprising results; the 'is not' operator tests whether the two names point to different objects. In the case above it makes no difference, but the best practice is still to use 'is not'.

Following on from the example above, there are also recommendations for how to test boolean values. If x is True or False, test for it like this
if x:
but not 
if x == True: 
or
if x is True:

3. Another practice which makes your code more readable is the use of startswith() and endswith() instead of string slicing. For example, use
if foo.startswith('bar'):
instead of
if foo[:3] == 'bar':
Both versions give the same result and both are robust, but the first version is deemed more readable.

4. To check for types use the isinstance() function
if isinstance(x, int):
rather than comparing the type directly with the == or 'is' operator
if type(x) == int:

if type(x) is int:
A direct type() comparison can easily give you the wrong answer. Take the following example
class MyDict(dict): 
  pass
x = MyDict()
print("type = ", type(x))
print(type(x) == dict)
print(isinstance(x, dict))
which gives the output
type =  <class '__main__.MyDict'>
False
True
Even though the MyDict class behaves just like a dictionary, since it inherits all of dict's functionality, the type() comparison does not see it that way, while isinstance() recognizes that this class has all the functionality of a dict.

5. When using try-except statements, always catch exceptions by name
try:
    x = a/b
except ZeroDivisionError:
    pass  # handle the division by zero
You can use broad try-except statements to catch all possible exceptions, for example to ensure that the user gets clean feedback. But a general try-except is usually a sign of sloppy code, since it suggests you have not thought about which exceptions could actually occur. If you do use a general except statement, catch the exception object and write it to a log file.
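A minimal sketch of that pattern, reusing the division example from above (the logging setup is up to you):
import logging

try:
    x = a/b
except ZeroDivisionError:
    x = 0  # the case we anticipated
except Exception:
    logging.exception("unexpected error")  # writes the full traceback to the log
    raise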

6. To break up long lines, the preferred method is to use parentheses and to align the continuation lines
if first_condition and second_condition and third_condition and fourth_condition:
should be
if (first_condition and second_condition and
    third_condition and fourth_condition):

7. Finally, you should follow naming conventions:
  • Class and exception names should use capitalized words without underscores, like GroupName or ExceptionName
  • Function, module, variable and method names should be lowercase, with words separated by underscores like process_text() and global_var_name
  • Constants are written in all capital letters with underscores separating words like MAX_OVERFLOW and TOTAL
A good way to enforce PEP style standards in your project is to use a tool like pylint. Pylint can easily be hooked into a git project, preventing commits which do not follow these standards. In my experience, this is a very good idea when working in a big team with very different experience levels.

I hope this summary was useful. Let me know if you have any questions/comments below.
cheers
Florian

Wednesday, January 31, 2018

The N+1 problem in relational databases and eager loading options in SQLalchemy

In the first part of this post, I will explain the N+1 problem in relational databases. The second part will show how one can deal with this problem in Flask-SQLalchemy.

Assume you have a user table, and each user has bought a number of products, listed in a sales table. Let's further assume you want a list of all products each user bought in the last two days. You can do that by querying the user table to get all users
SELECT * FROM User;
and then for each user, you query the number of products they have bought
SELECT * FROM Sales WHERE user_id = ? AND date > two_days_ago
In other words, you have one Select statement for the user table followed by N additional Select statements to get the associated products, where N is the total number of users.

Each access to the database has a certain overhead which means that the procedure above will scale quite badly with the number of users.

Given that we intend to perform the same query for each user, there should be a way to do this more efficiently than with N+1 calls to the database. Most object-relational mappers (ORMs), such as SQLalchemy, give you several tools to deal with this problem. The SQLalchemy manual discusses this issue here.

However, note that ORMs tend to hide the N+1 problem. If you wrote pure SQL, you would directly see the number of Select statements you submit, while in SQLalchemy it is not obvious when the data is loaded and how many times the database is accessed. So it is quite important to understand the loading options in SQLalchemy and to use them appropriately.
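One simple way to make the problem visible is to log every statement SQLalchemy emits. In Flask-SQLalchemy this is a one-line config switch (a sketch, assuming a standard app object):
app.config['SQLALCHEMY_ECHO'] = True  # log all emitted SQL statements

# With lazy loading, this loop emits one SELECT for the users plus
# one SELECT per user -- the N+1 pattern shows up directly in the log.
for user in User.query.all():
    print(user.products)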

Let's build the example discussed above using Flask-SQLalchemy
sales = db.Table(
    'sales',
    db.Column('product_id', db.Integer, db.ForeignKey('product.id')),
    db.Column('user_id', db.Integer, db.ForeignKey('user.id'))
)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...

    products = db.relationship('Product',
                               secondary=sales)

class Product(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    date = db.Column(db.DateTime)
    ...
Here we have a User and a Product table, and the helper table 'sales' establishes a many-to-many relationship between the two (it has to be defined before the relationship references it). Through the helper table, the products attribute of each user object gives us access to all products this user bought.

We now have two main choices for how to implement the products relationship (db.relationship above). The default setting of SQLalchemy is lazy loading, which means that the related rows of the sales table are not loaded together with the user object. This is a reasonable default, but not what we want here: with lazy loading we run straight into the N+1 problem, since fetching the products of each user requires another N database calls.

To control this behavior we have to make use of the lazy keyword, which by default is set to 'select', like this
products = db.relationship('Product',
                           secondary=sales,
                           lazy='select')
So let's go through the options we have for this keyword. One very useful option is lazy='dynamic' like this
products = db.relationship('Product',
                           secondary=sales,
                           lazy='dynamic')
which means that user.products returns a query object rather than table rows. This gives you a lot of flexibility in how the second database access looks. For example, you could add additional filters like this
user.products.filter(Product.date > two_days_ago).all()
This can be very useful: if there are many rows in the product table, the additional filter might make the second database access much quicker. But it does not solve our problem, since it still requires a second database access. To avoid the N+1 problem we need to load the rows of the sales table together with the users.

To do this we have to make use of eager loading and SQLalchemy provides three choices for that: joined loading (lazy='joined'), subquery loading (lazy='subquery') and select IN loading (lazy='selectin').

Let's start with subquery loading, since it is often a good choice. Here two Select statements are run, one to retrieve all users and one to retrieve all related rows in the sales table. So rather than N+1, we have 2 Select statements.

Alternatively, joined loading squeezes everything into one Select statement. So we save another Select statement compared to subquery loading, but the downside is that if the product table is large, this can become very slow, since it makes use of a LEFT OUTER JOIN.

Finally, selectin loading is the newest option in SQLalchemy and the recommended one according to the docs. It queries at most 500 rows per Select statement, so if you need to retrieve many products you might end up with several Select statements, but in my experience this option works very well.

Note that most of the time you do not want to add this loading option to the mapping, but instead set the loading style at runtime. You can do that with joinedload(), subqueryload(), selectinload() and lazyload(), which can be imported as
from sqlalchemy.orm import subqueryload, joinedload, selectinload
and used like 
users = User.query.options(selectinload(User.products)).all()
I hope that was useful. Let me know if you have any question/comments below.
best
Florian

Monday, January 22, 2018

Understanding the self variable in Python classes

If you have defined a class in Python, you have probably encountered the self variable. Here is a simple example
class Cat(object):
    def __init__(self, hungry=False):
        self.hungry = hungry

    def should_we_feed_the_cat(self):
        if self.hungry:
            print("Yes")
        else:
            print("No")
This is a simple class describing a cat, which has the attribute hungry. The __init__() method is automatically called whenever a new class object is created, while the should_we_feed_the_cat() method belongs to the object and can be called at any time. This method tests the hungry attribute and prints "Yes" or "No".

Let's initialize a cat
>>> felix = Cat(True)
>>> felix.should_we_feed_the_cat()
Yes
Here we initialized the object with one argument, even though the __init__() method is defined with two arguments. We also tested whether the cat is hungry using the should_we_feed_the_cat() method without any argument, even though it is defined with one. So why does Python not complain about this?

First, we should clarify the difference between a function and a method. A method is a function which is associated with an object. So all the functions we defined within our class above are associated with objects of that class, and hence they are methods. However, we can still access a method as a plain function through the class
>>> type(Cat.should_we_feed_the_cat)
<class 'function'>
>>> type(felix.should_we_feed_the_cat)
<class 'method'>
The first is a plain function, while the second is a bound method. When a method is called, Python automatically passes the object itself as the first argument. So
Cat.should_we_feed_the_cat(felix)
is the same as
felix.should_we_feed_the_cat()
In the first case we call a function and need to pass in the object manually, while in the second case we call the method and the object is passed in automatically.

If we get this wrong, for example by calling the method and passing in the object as well, we get an error message which you have probably seen before (I certainly have)
>>> felix.should_we_feed_the_cat(felix)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: should_we_feed_the_cat() takes 1 positional argument but 2 were given

If you really want to avoid the self variable you can do that by using the @staticmethod decorator
class Cat(object):
    def __init__(self, hungry=False):
        self.hungry = hungry

    @staticmethod
    def other_function():
        print("This function has no self variable")
Note that self is not a reserved keyword. You could write the class above naming the first variable of each method anything you want, and it would still work
class Cat(object):
    def __init__(whatever, hungry=False):
        whatever.hungry = hungry

    def should_we_feed_the_cat(whatever):
        if whatever.hungry:
            print("Yes")
        else:
            print("No")
However, self is the accepted convention and you should stick to it.

Given the very simple concept captured by the self variable, you might already think about ways to get rid of it and to access the class attributes differently. If you have experience with other programming languages like Java, you might prefer a pre-defined keyword. However, that does not seem to be an option in Python. If you are interested, here is a blog post by Guido van Rossum arguing for keeping the explicit self variable.

I hope that was useful and let me know if you have questions/comments in the comments section below.
cheers
Florian

Wednesday, January 17, 2018

Lambda functions in Python

The lambda operator or lambda function is a way to create small anonymous functions, i.e. functions without a name. This construct can be useful if you need a simple function only once and you want to discard it directly after usage.

The syntax for a lambda function is
lambda arguments: expression
and they can have any number of arguments but only one expression. The expression is evaluated and returned.

Here is an example: This function squares its argument
g = lambda x: x**2
print("g(5) = ", g(5))
which gives
g(5) =  25
The same function can be defined in a conventional way
def f(x): 
   return x**2
print("f(5) = ", f(5))
which gives
f(5) =  25
Both functions g() and f() do the same thing, so lambda functions can operate like any normal function and are basically an alternative to def. Given the example above, you might ask what the purpose of a lambda function is, if it is just a different way of defining a function.

Lambda functions can come in handy if you do not want to define a separate function. Assume you have a list and you want to find all list elements which are even
a = [2, 3, 6, 7, 9, 10, 122]
print(list(filter(lambda x: x % 2 == 0, a)))
which gives
[2, 6, 10, 122]
filter() is a built-in Python function which expects a function as its first argument and an iterable as its second. The filter function discards all elements for which the lambda function returns False and keeps those for which it returns True; in Python 3 it returns an iterator, which is why we wrap it in list() to print the result. Wherever a function is expected we can instead provide a lambda function, and for a simple task like the one above you might prefer a lambda function over defining a stand-alone function using def.

Note that many programmers don't like lambda functions, but rather use list comprehension, which they deem more readable. The example above written with a list comprehension would look like this
print([num for num in a if num % 2 == 0])
This looks a bit easier to read, since it uses well-known concepts, and it does not actually take much more space.

Maybe just to extend on this, we should discuss two examples where a list comprehension does not work. Let's assume we want to sort a list by the value modulo 10. We can do that with
print(sorted(a, key=lambda x: x % 10))
which results in
[10, 2, 122, 3, 6, 7, 9]
Another example is where a function can return a lambda function
import numpy as np

def gaussian(height, center_x, center_y, width_x, width_y):
    return lambda x, y: height*np.exp(
                -(((center_x-x)/width_x)**2+((center_y-y)/width_y)**2)/2)
The function gaussian() returns a 2D Gaussian function, which expects two coordinates as input and returns the value of the 2D Gaussian at that point.
func = gaussian(1., 0.5, 0.5, 1., 1.)
print(func(0.2, 0.8))
So we first create a Gaussian distribution with the center at $x=0.5$ and $y=0.5$ and then print the value of this Gaussian at $x=0.2$ and $y=0.8$.

And there are many more examples where lambda functions can come in very handy. In any case, it is important to be familiar with the concept of lambda functions, even just to be able to read other people's code.

Besides the filter function, there are also map() and reduce(), which often make use of lambda functions. The map() function maps all values of a list to another list using a function, like this
a = [2, 3, 6, 7, 9, 10, 122]
b = [121, 32, 61, 45, 78, 1, 90]
print(list(map(lambda x, y: x+y, a, b)))
which gives
[123, 35, 67, 52, 87, 11, 212]
Here I provided two arguments to the lambda function. The same construction works with any number of arguments, as long as the number of lists matches and all lists have the same length.

The reduce() function is a bit different, since it is executed multiple times. The function fed into reduce() has to accept two arguments. It is first called on the first two elements of the list, then on the result of that call and the third element, and so on, until all list elements have been processed.

from functools import reduce
a = [2, 3, 6, 7, 9, 10, 122]
print(reduce(lambda x,y: x+y, a))
which gives
159
This means that the function is called $n-1$ times if the list contains $n$ elements. The return value of the last call is the result of the reduce() function.
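As a side note, for a plain sum the lambda is not strictly needed: the operator module provides the same function, and the built-in sum() is simpler still.
from functools import reduce
from operator import add

a = [2, 3, 6, 7, 9, 10, 122]
print(reduce(add, a))  # 159, same as the lambda version
print(sum(a))          # 159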

Lambda functions can be used anywhere, but in my case they rarely appear outside of filter(), map() and reduce(). I hope that was useful; please leave questions/comments below.
cheers
Florian

Wednesday, January 10, 2018

Global variables in Python

Global variables in Python behave a bit differently than in other programming languages. If we define a global variable outside a function and print it inside the function, everything works fine.
x = 5

def f():
    print("x = ", x)
f()
which results in
x =  5
However, if we add a second assignment to the same variable within the function, the function loses its access to the global variable
x = 5

def f():
    x = 6
    print("x = ", x)
f()
print("but the global variable has not changed... => ", x)
which gives
x =  6
but the global variable has not changed... =>  5
So if you assign to the same variable name within the function, a new local variable is created, which is lost after the function finishes. The global variable is not affected.

However, if you try to modify the global variable within the function it will result in an error
x = 5

def f():
    x += 1
    print("x = ", x)
f()
which gives
UnboundLocalError: local variable 'x' referenced before assignment
I think this is very confusing at first: one has access to the global variable, as the first example showed, but cannot modify it.
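The reason is that Python decides when compiling the function body which names are local: an assignment to x anywhere in the function makes x local everywhere in it, so the read that is part of x += 1 already refers to the (not yet assigned) local variable. The same trap in its minimal form:
x = 5

def f():
    print(x)  # UnboundLocalError: the assignment below makes x local
    x = 6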

To actually modify the global variable we need the global keyword like this
x = 5

def f():
    global x
    x += 1
    print("x = ", x)
f()
which gives
x =  6
Python does not require explicit variable declarations, so it has to decide whether an assignment inside a function creates a local variable or refers to a global one; it assumes function scope unless you explicitly state otherwise with the global keyword. In languages with explicit declarations, such as C++, this ambiguity does not arise. For read access, however, Python falls back to the global variable if no local variable of that name exists.

Anyway, I hope that was helpful, if you have any questions/comments, please let me know in the comment section below.
cheers
Florian

Wednesday, January 3, 2018

Plotting MCMC chains in Python using getdist

This is a quick introduction to the getdist package by Antony Lewis, which allows visualizing MCMC chains.
from getdist import plots, MCSamples
import numpy as np

def main():
    mean = [0, 0, 0]
    cov = [[1, 0, 0], [0, 100, 0], [0, 0, 8]]
    x1, x2, x3 = np.random.multivariate_normal(mean, cov, 5000).T
    s1 = np.c_[x1, x2, x3]

    mean = [0.2, 0.5, 2.]
    cov = [[0.7, 0.3, 0.1], [0.3, 10, 0.25], [0.1, 0.25, 7]]
    x1, x2, x3 = np.random.multivariate_normal(mean, cov, 5000).T
    s2 = np.c_[x1, x2, x3]

    names = ['x_1', 'x_2', 'x_3']
    samples1 = MCSamples(samples=s1, labels=names, label='sample1')
    samples2 = MCSamples(samples=s2, labels=names, label='sample2')

    g = plots.getSubplotPlotter()
    g.triangle_plot([samples1, samples2], filled=True)
    g.export('triangle_plot.png')
    g.export('triangle_plot.pdf')
    return 

if __name__ == "__main__":
    main()
which results in a triangle plot of the two samples (figure not shown here).

We can also print the constraints on the individual parameters
def get_constraints(samples):
    for i, mean in enumerate(samples.getMeans()):
        upper = samples.confidence(i, upper=True, limfrac=0.05)
        print("\nupper limit 95 C.L. = %f" % upper)
        lower = samples.confidence(i, upper=False, limfrac=0.05)
        print("lower limit 95 C.L. = %f" % lower)
        print("%s = %f +/- %f +/- %f" % (samples.parLabel(i),\
        mean, mean - samples.confidence(i, limfrac=0.16),\
        mean - samples.confidence(i, limfrac=0.025)) )
    return 
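We can now call this function on our first sample set
get_constraints(samples1)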
which in our case looks like this
upper limit 95 C.L. = 1.653261
lower limit 95 C.L. = -1.637202
x_1 = 0.000167 +/- 0.996626 +/- 1.958270

upper limit 95 C.L. = 16.365135
lower limit 95 C.L. = -16.484502
x_2 = -0.010400 +/- 9.874866 +/- 19.682727

upper limit 95 C.L. = 4.637981
lower limit 95 C.L. = -4.656869
x_3 = -0.009280 +/- 2.827152 +/- 5.571246
While in this example we set the means and covariances of the parameters manually at the beginning to generate the samples, usually one is given an MCMC chain and wants to determine these parameters from it. To get the covariance matrix between two parameters, we have to use the cov() function and provide the indices of the parameters
print("cov(x_1, x_3) = ", samples2.cov(pars=[0,2]))
print("cov(x_1, x_2) = ", samples2.cov(pars=[0,1]))
which in our case just reproduces the input values
cov(x_1, x_3) =  [[ 0.69364079  0.10129875]
 [ 0.10129875  7.13148423]]
cov(x_1, x_2) =  [[  0.69364079   0.30451831]
 [  0.30451831  10.05805092]]
I posted the example code above on GitHub.
best
Florian