• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Monte Carlo Method

steve_bank

https://en.wikipedia.org/wiki/Monte_Carlo_method


Monte Carlo methods are common. To test a system, create random vectors for the variables and evaluate the output point by point. This is more efficient than a full factorial test, where you have to work out the worst-case limits of each variable by hand and test every combination.

Electrical simulation tools have it built in.

It is used in queuing theory to simulate arrival times.
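As an aside, a minimal sketch of that queuing use (in Python rather than the Scilab used below; the function name and rate are mine): arrival times for a Poisson process are simulated by summing exponentially distributed interarrival gaps.

```python
import random

def simulate_arrivals(rate, n, seed=1):
    """Simulate n Poisson-process arrival times (arrivals per unit time
    = rate) by accumulating exponential interarrival gaps."""
    rng = random.Random(seed)
    t = 0.0
    times = []
    for _ in range(n):
        t += rng.expovariate(rate)   # gap ~ Exp(rate)
        times.append(t)
    return times

arrivals = simulate_arrivals(rate=2.0, n=10000)
print(arrivals[-1] / len(arrivals))  # mean gap, close to 1/rate = 0.5
```

The mean gap converges to 1/rate, which is a quick sanity check on any queuing simulation.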


The first example is a simple series circuit of two resistors attached to a power supply. The voltage across the low-side resistor is Vin * (r2/(r1 + r2)).

All components have variation and sensitivity to the environment. Here the resistors are varied for both tolerance and temperature.

clear

function [perc,sor] = cums(x)
// returns the data set sorted ascending
// and an estimate of the cumulative distribution in percent
[n,m] = size(x)
sor = gsort(x,"g","i")
for i = 1:n perc(i) = (i/n) * 100;end;
endfunction

function [_avg,_std,mi,ma] = avg_std(x)
// returns the average, standard deviation,
// min, and max of a data set
_sum = 0
[n,m] = size(x)
for i = 1:n _sum = _sum + x(i);end;
_avg = _sum/n
_sum = 0
for i = 1: n _sum = _sum + (_avg - x(i))^2;end
_std = sqrt(_sum/n)
ma = max(x)
mi = min(x)
endfunction

tempco = .001 // fractional change in resistance per degree C
n = 1000
r1 = grand(n, 1,"nor",1000,10)
r2 = grand(n, 1,"nor",1000,10)
temp = grand(n, 1,"nor",27,10) // 27c nominal
vin = grand(n, 1,"nor",10,0.5)
// resistance shifts linearly with temperature relative to the 27 C nominal:
// above 27 C resistance increases, below 27 C it decreases
for i = 1:n
dr = tempco * (temp(i) - 27)
r1(i) = r1(i) * (1 + dr)
r2(i) = r2(i) * (1 + dr)
end

for i = 1:n
vout(i) = vin(i) * r2(i)/(r1(i) + r2(i))
end

[pc,srt] = cums(vout)
[av,sd,lo,hi] = avg_std(vout)
[avr,sdr,lor,hir] = avg_std(r1)
disp(avr,sdr,lor,hir)

w1 = scf(1)
clf(w1)
plot(srt,pc)
w2 = scf(2)
clf(w2)
histplot(100,vout,normalization = %f)
w3 = scf(3)
clf(w3)
plot(temp)
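For readers without Scilab, here is a rough vectorized equivalent in Python/NumPy (my sketch; the linear tempco model relative to the 27 C nominal is an assumption about the intent of the loop above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tempco = 0.001                       # fractional resistance change per deg C

r1 = rng.normal(1000, 10, n)         # ohms, 1% sigma
r2 = rng.normal(1000, 10, n)
temp = rng.normal(27, 10, n)         # deg C, 27 nominal
vin = rng.normal(10, 0.5, n)         # volts

# linear temperature coefficient, applied to both resistors
scale = 1 + tempco * (temp - 27)
r1 = r1 * scale
r2 = r2 * scale

vout = vin * r2 / (r1 + r2)
print(vout.mean(), vout.std())       # near 5 V, spread from all sources
```

Note that because both resistors share the same temperature shift, the divider ratio is nearly immune to temperature; most of the output spread comes from the supply and the resistor tolerances.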
 
[fig2.png]

Output voltage distribution over component variation and temperature.
 
I got this from the link: Monte Carlo integration to calculate pi. Fill a 1x1 square with uniformly distributed random points and count the points that fall inside the quarter circle of radius 1. The fraction of points inside estimates the quarter circle's area, pi/4, so multiplying by 4 gives pi.

At 100,000 points the accuracy is 3.14 +- .01 on repeated runs. Not very good.

A = PI*r^2, so for r = 1 the full circle's area is PI and the quarter circle's is PI/4.


// fill a 1x1 square with uniform random points
// the fraction inside the quarter circle of radius 1 estimates pi/4
clear
n = 100000
hi = 1
x = grand(n,1,"unf",0,hi)
y = grand(n,1,"unf",0,hi)
countin = 0
countout = 0

for i = 1:n
if (x(i)^2 + y(i)^2 < 1.0) then
countin = countin + 1
else countout = countout + 1
end
end
format(8)
_pi = 4 * countin/n
disp(_pi,%pi)
w1 = scf(1)
clf(w1)
plot2d(x,y,-4);
 
[xy.png]

Scatter plot of random points in the square.
 
Monte Carlo simulations are nice ... and fun ... and especially popular with today's high-speed computers. But often it's easy to calculate an answer (or statistical formula) directly.

In the case of resistor values, beware! How are resistors produced and labeled? If a manufacturer is placing its resistors into either a ±5% bin or a ±10% bin, you might be able to assume that EVERY resistor labeled ±10% has an error of at least 4%! :)
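That binning effect is easy to fold into a Monte Carlo run: draw the tolerance from a distribution with the inner band removed, instead of from a Gaussian. A hypothetical Python sketch (the 4%/10% cutoffs just follow the comment above):

```python
import random

def binned_resistor(nominal, rng, inner=0.04, outer=0.10):
    """Draw a +/-10% resistor whose error magnitude is at least 4%,
    i.e. the tighter parts were already binned out by the manufacturer."""
    mag = rng.uniform(inner, outer)      # error magnitude in [4%, 10%]
    sign = rng.choice([-1, 1])
    return nominal * (1 + sign * mag)

rng = random.Random(2)
samples = [binned_resistor(1000, rng) for _ in range(10000)]
# no sample falls inside the +/-4% band around nominal
print(min(abs(r - 1000) for r in samples))
```

Running the divider simulation with this bimodal distribution instead of a Gaussian would widen the output spread noticeably.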


I got this from the link, Monte Carlo integration to calculate PI.
At 100,000 points the accuracy is 3.14 +- .01 on repeated runs. Not very good..

This accuracy is easy to calculate; use a formula like stddev = 4 * sqrt(s * (1-s) / 100000).
Here, 100000 is the number of trials and s is the success probability of each trial (the chance a point lands inside the quarter circle); here s = 3.14159 / 4.
I've multiplied by 4 because your procedure multiplies the in-circle fraction by 4.
Here stddev evaluates to 0.00519. Multiplying this by 1.96, as is customary when computing "95% error bars", gives ±0.0102, matching Steve's ±.01.

You will have to perform 100 times as many trials to gain one additional digit of accuracy, or 10,000 times as many trials for two digits.
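The arithmetic above is quick to check (a Python sketch; N and s as defined in the post):

```python
import math

N = 100_000
s = math.pi / 4                           # P(point lands in quarter circle)
stddev = 4 * math.sqrt(s * (1 - s) / N)   # factor 4 from scaling count to pi
print(round(stddev, 5))                   # 0.00519
print(round(1.96 * stddev, 4))            # 0.0102, the 95% error bar
```

Since stddev scales as 1/sqrt(N), each extra digit of accuracy indeed costs a factor of 100 in trials.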
 