I have written the following Python code to estimate the value of Pi; it uses the Monte Carlo method. Obviously, increasing the number of samples makes the code slower, and I assume the slowest part of the code is the sampling part.
How can I make it faster?
from __future__ import division
import numpy as np

a = 1
n = 1000000

# Draw n uniform samples for the x and y coordinates in [0, a)
s1 = np.random.uniform(0, a, n)
s2 = np.random.uniform(0, a, n)

# Count the points that fall inside the quarter circle of radius 1
ii = 0
for item in range(n):
    if s1[item]**2 + s2[item]**2 < 1:
        ii = ii + 1

print(ii * 4 / n)
Can you suggest other (presumably faster) approaches?
The bottleneck here is actually your for loop. Python for loops are relatively slow, so if you need to iterate over a million items, you can gain a lot of speed by avoiding them altogether. In this case, it's quite easy: replace the loop with a single vectorized array expression.
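Reusing the `s1`, `s2`, and `n` from your setup, the whole loop collapses to one expression (a sketch; `inside` and `pi_estimate` are just illustrative names):

```python
import numpy as np

# Same setup as in the question
a = 1
n = 1000000
s1 = np.random.uniform(0, a, n)
s2 = np.random.uniform(0, a, n)

# One vectorized expression replaces the whole loop:
# the comparison yields a boolean array, and summing it
# counts the True entries (points inside the quarter circle).
inside = s1**2 + s2**2 < 1
pi_estimate = 4 * inside.sum() / n
print(pi_estimate)
```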
This works because numpy has built-in support for optimized array operations. The looping occurs in C instead of Python, so it's much faster. I did a quick test so you can see the difference.
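You can run a timing comparison yourself with `timeit`; exact numbers will depend on your machine, but the gap is typically two orders of magnitude (`loop_count` and `vectorized_count` are illustrative names):

```python
import timeit
import numpy as np

n = 100000
s1 = np.random.uniform(0, 1, n)
s2 = np.random.uniform(0, 1, n)

def loop_count():
    # The original pure-Python loop
    ii = 0
    for i in range(n):
        if s1[i]**2 + s2[i]**2 < 1:
            ii += 1
    return ii

def vectorized_count():
    # The numpy one-liner doing the same count
    return int((s1**2 + s2**2 < 1).sum())

t_loop = timeit.timeit(loop_count, number=5)
t_vec = timeit.timeit(vectorized_count, number=5)
print("loop: %.4fs, vectorized: %.4fs" % (t_loop, t_vec))
```

Here are a few examples of the kinds of operations that are possible, just so you have a sense of how this works.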
Squaring an array:
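For example, with an illustrative array:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
x**2  # elementwise: array([1., 4., 9.])
```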
Adding arrays:
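Again with illustrative arrays:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
x + y  # elementwise: array([11., 22., 33.])
```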
Comparing arrays:
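A comparison also happens elementwise and yields a boolean array:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
x < 2.5  # elementwise: array([ True,  True, False])
```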
Summing boolean values:
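And summing a boolean array counts its `True` entries, which is exactly the counting step in the Pi estimate:

```python
import numpy as np

b = np.array([True, True, False])
b.sum()  # True counts as 1, False as 0, so this is 2
```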
As you probably realize, there are faster ways to estimate Pi, but I will admit that I've always admired the simplicity and effectiveness of this method.