• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Math Quiz Thread

(Me, on counting the splits of some quantity into various numbers of coins of various sizes)
Can you post your code?
Code:
(* Data format for each coin split: {# each kind of coins, leftover #} *)

(* Input: coin split for previous coins, current coin size. Output: list of coin splits for this one also. *)

coinsplitgen[prev_, csize_] := Module[{pvals, rmdr, ncmax, nc},
  pvals = Most[prev];
  rmdr = Last[prev];
  ncmax = Floor[rmdr/csize];
  Table[Join[pvals, {nc, rmdr - nc*csize}], {nc, 0, ncmax}]
  ]

(* Input: coin split for previous coins, current coin size. Output: coin split for this one also. *)

coinsplitmost[prev_, csize_] := Module[{pvals, rmdr, nc},
  pvals = Most[prev];
  rmdr = Last[prev];
  nc = Floor[rmdr/csize];
  Join[pvals, {nc, rmdr - nc*csize}]
  ]

(* List of previous coin splits, coin size -> list of coin splits *)

coinsplitgenlist[prevlist_, csize_] := 
 Flatten[coinsplitgen[#, csize] & /@ prevlist, 1]

coinsplitmostlist[prevlist_, csize_] := 
 coinsplitmost[#, csize] & /@ prevlist

(* All coin splits for some starting amount and some coin sizes *)

coinsplitset[amt_, csizes_] := 
 coinsplitmostlist[Fold[coinsplitgenlist, {{amt}}, Most[csizes]], 
  Last[csizes]]

Here are some runs:
Code:
Length[coinsplitset[100, {25, 10, 5, 1}]]
242

Length[coinsplitset[100, {50, 25, 10, 5, 2, 1}]]
3953

Length[coinsplitset[240, {24, 12, 6, 3, 1}]]
38841

Length[coinsplitset[100 - 1, {25, 10, 5, 1}]]
213

Length[coinsplitset[100 - 1, {50, 25, 10, 5, 2, 1}]]
3794

Length[coinsplitset[240 - 1, {24, 12, 6, 3, 1}]]
37070
 
I wrote the C++ program. It does a brute-force search; it checks every possible value. I ran it on my iMac, which has a 2-GHz Intel Core 2 Duo CPU chip.

Code:
Total quantity: 99
Number of coin types: 2
Total number of coins: 900
Average number of coins: 9.09091
Coin sets
10 1 
11 1 

real	0m0.007s
user	0m0.001s
sys	0m0.003s

Total quantity: 99
Number of coin types: 3
Total number of coins: 526
Average number of coins: 5.31313
Coin sets
22 5 1 
23 5 1 

real	0m0.148s
user	0m0.015s
sys	0m0.004s

Total quantity: 99
Number of coin types: 4
Total number of coins: 410
Average number of coins: 4.14141
Coin sets
37 11 3 1 
38 11 3 1 

real	0m0.685s
user	0m0.593s
sys	0m0.004s

Total quantity: 99
Number of coin types: 5
Total number of coins: 346
Average number of coins: 3.49495
Coin sets
40 16 7 3 1 
41 16 7 3 1 
44 18 7 3 1 
44 20 8 3 1 
45 18 7 3 1 
45 20 8 3 1 

real	0m18.646s
user	0m17.122s
sys	0m0.047s
 
You are using the greedy change-making algorithm. This is optimal for the {1,5,10,25} denominations (and others), but not for all: e.g., making change for 36c from {1,5,18,25} would optimally be two 18c coins, but the greedy choice would be a 25c plus two 5c plus a 1c.

Try DP instead.
 
What's DP?

The only thing I can think of at the moment is to go through all possible combinations of coins that fit, and then find the best one.
 
Dynamic programming. Since the problem satisfies an order property, solutions to smaller problems may be used to solve larger problems. You basically keep a table of solutions of smaller values and work your way up, checking the relevant smaller solutions to determine the solution with the fewest coins.

Edit: Here's a write-up in more detail.
 
Did you look at the second link?
There's still the problem of there being irrelevant variables in the equation.

The second investment yielded $200 more profit than the first investment, but the second investment took seven months longer than the first. Which was the better investment? If we do not consider time, the second investment is better. If we consider time, the first investment is better. If we consider both profit and time, then the first investment is better, because even though the second was more profitable (dollar-wise), it took considerably more time to make the additional profit. My quick and crude calculations tell me that the second car would have had to be sold no more than about 18 days later than the first for both investments to be equally attractive when considering both time and money. The per-day average profit gradually declines with each passing day.

I'm not sure I want you to be the priest's mathematician for calculating end of times.
 
A basic optimizer is simple. Excel has one.

y1 = f(a,b,c)
y2 = g(a,b,c)

y1 target = the interval y1 min to y1 max

y2 target = the interval y2 min to y2 max

a = a min to a max interval a-step
b = b min to b max interval b-step
c = c min to c max interval c-step

Iterate over all combinations of the a, b, c values and test whether y1 and y2 fall within their bounds.

Monte Carlo techniques can speed up a solution: the a, b, c vectors are sampled randomly, resulting in lower average run times.

Adaptive or dynamic algorithms adjust the step sizes and/or the y1, y2 bounds on the fly, based on a set of rules or functions. The problem with dynamic algorithms is that they form a feedback loop and can be unstable; predictor-corrector algorithms for solving differential equations have a similar problem.

The Newton-Raphson method of polynomial root finding is a dynamic optimizer of sorts with the attendant problems of stability and convergence. As the algorithm closes in on a solution it can go into an overshoot-undershoot loop, an infinite oscillation.

I worked in a pressure-sensor group that made a special high-accuracy device over a wide temperature range. There were three calibration resistors. The sensor was characterized over pressure and temperature. The measured data was plugged into an optimizer that predicted the resistor values by optimizing a set of nonlinear equations. The guy who developed the app used the Excel solver and VB.
 
'...Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine all possible ways to solve the problem and will pick the best solution. Therefore, we can roughly think of dynamic programming as an intelligent, brute-force method that enables us to go through all possible solutions to pick the best one...'

Different fields give different names to the same fundamental process. Generally called trial and error supported by fast computers.

The engineering design process could be called a form of dynamic programming.

Related see linear programming.


http://en.wikipedia.org/wiki/Linear_programming
http://en.wikipedia.org/wiki/Dynamic_programming
 
That Wikipedia paragraph is just plain wrong (it happens occasionally), dynamic programming is not brute force, nor does it examine all possible solutions. I will go edit it out.
 
Thanks. I've implemented it in Mathematica.

I've found
{31, 14, 6, 1} -- 3.96 coins
{24, 15, 4, 1} -- 3.97 coins

I'll have to implement it in C++ also.
 
It may be poorly worded; however, it all depends on the problem. Depending on how you set up the problem, you might evaluate all solutions or iterate until a solution is found.

'Brute force' refers to any lengthy iterative trial-and-error process, as opposed to a closed-form analytical solution.

The integral of 2x is x^2, derived from the definition of the derivative. A brute-force integration would be a Riemann sum. An adaptive or dynamic integration algorithm might vary dx across different regions of the function, but it is still brute force.

None of the iterative techniques applied to complex problems are possible without computers; hence the speed of computers is the 'brute force' power.

Various techniques try to minimize the number of trials to converge on a solution.

I tend to associate linear and dynamic programming with manufacturing operations. They are used to do things like find minimum-cost solutions. In a logistics transport problem the roads and nodes are finite, so the solver may iterate over all solutions.

The problems are generally sets of nonlinear equations, which by definition cannot be directly solved with linear algebra. Solving nonlinear systems of equations is a generic problem across many areas, including what is labeled dynamic programming. The area boundaries are arbitrary; the math remains the same.

The generic term is a 'nonlinear solver', and the word 'solver' is generally used with 'nonlinear' implied.

The electric-circuits solver is called SPICE, and it originated at Berkeley. Given a set of equations representing a circuit, it iterates until Kirchhoff's voltage and current laws are satisfied as constraints within a specified error bound.

Genetic algorithms are another form of a solver.

The underlying algorithms are all generally the same, names change with particular areas of applications and specific problems.
 
No, it is wrong. Everything else you wrote is either nonsense or strangely irrelevant, as usual.
 
I did that coin problem in C++.
Code:
Total quantity: 99
Number of coin types: 2
Total number of coins: 900
Average number of coins: 9.09091
Coin sets
10 1 
11 1 

real	0m0.189s
user	0m0.012s
sys	0m0.005s

Total quantity: 99
Number of coin types: 3
Total number of coins: 515
Average number of coins: 5.20202
Coin sets
19 12 1 

real	0m0.739s
user	0m0.628s
sys	0m0.005s

Total quantity: 99
Number of coin types: 4
Total number of coins: 389
Average number of coins: 3.92929
Coin sets
25 18 5 1 
29 18 5 1 

real	0m27.679s
user	0m24.990s
sys	0m0.087s

Number of coin types: 5
Total number of coins: 329
Average number of coins: 3.32323
Coin sets
33 23 16 5 1 

real	15m10.547s
user	11m17.469s
sys	0m3.017s
beero1000, I got that 3.9 coins that you mentioned earlier.
 
What compiler are you using? My freeware MS Express will no longer start up.
 
OS X's built-in gcc. I compiled it as C++ at optimization level 3: g++ -O3.

My freeware MS Express will no longer start up.
What's MS Express?

http://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx


The Microsoft development tools: C++ plus GUI development. More than I need; I'd like something like the old Borland Turbo C.

Scilab has a C and FORTRAN interface.

I have not used an Apple system since the Apple II.

Thanks.
 
Not enough linear algebra.
  1. Decompose \(\begin{bmatrix}1&1&0\\1&3&4\\1&5&8 \end{bmatrix}\) into a sum of two rank 1 matrices. Explain how, in general, a rank R matrix can be decomposed into a sum of R rank 1 matrices. Is there a 'best' decomposition?
  2. If a 3 x 3 matrix A has eigenvalues 0, 1, and 2, what are the eigenvalues of \((A^2+I)^{-1}\)?
  3. How many different Jordan forms represent the class of 2 x 2 binary matrices?
  4. Transposition of n x n matrices is a linear mapping. If every (finite-dimensional) linear mapping has a matrix representation, why can't we say that there is a matrix A such that \(AM = M^T\) for every M? What is the correct statement?
 
Decompose \(\begin{bmatrix}1&1&0\\1&3&4\\1&5&8 \end{bmatrix}\) into a sum of two rank 1 matrices. Explain how, in general, a rank R matrix can be decomposed into a sum of R rank 1 matrices. Is there a 'best' decomposition?
I'll use row rank here.

The first two rows of that matrix can be used to calculate the third one: - (first) + 2*(second). Each of the first two rows then gives a rank-1 matrix, as the outer product of a column of coefficients with that row:
{{1}, {0}, {-1}} . {{1, 1, 0}}
{{0}, {1}, {2}} . {{1, 3, 4}}
with
{{1, 5, 8}} = 2*{{1, 3, 4}} - {{1, 1, 0}}

More generally, pick R linearly independent rows of a rank-R matrix. Every row is a combination of them, so collecting each chosen row's coefficients into a column vector and forming the outer products gives R rank-1 matrices that sum to the original. As for a 'best' decomposition, the singular value decomposition writes the matrix as a sum of R rank-1 terms ordered by singular value, which is optimal for low-rank approximation.

If a 3 x 3 matrix A has eigenvalues 0, 1, and 2, what are the eigenvalues of \((A^2+I)^{-1}\)?
In general, if a matrix A has an eigenvalue λ, then f(A) has the corresponding eigenvalue f(λ).
Thus, 0, 1, 2 correspond to 1, 1/2, 1/5.
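The one-line justification, for any eigenpair (standard spectral-mapping reasoning, spelled out here for completeness):

```latex
Av = \lambda v
\;\Rightarrow\; A^2 v = \lambda^2 v
\;\Rightarrow\; (A^2 + I)\,v = (\lambda^2 + 1)\,v
\;\Rightarrow\; (A^2 + I)^{-1} v = \frac{1}{\lambda^2 + 1}\, v
```

so each eigenvalue λ of A maps to 1/(λ²+1), which is always defined since λ²+1 is never zero for real λ.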

How many different Jordan forms represent the class of 2 x 2 binary matrices?
Eight:
\( \left( \begin{array}{cc} -1 & 0 \\ 0 & 1 \\ \end{array} \right), \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ \end{array} \right), \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \\ \end{array} \right), \left( \begin{array}{cc} 0 & 0 \\ 0 & 2 \\ \end{array} \right), \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array} \right), \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right), \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right), \left( \begin{array}{cc} \frac{1}{2} \left(1-\sqrt{5}\right) & 0 \\ 0 & \frac{1}{2} \left(1+\sqrt{5}\right) \\ \end{array} \right) \)

Found with Mathematica:
Code:
TeXForm /@ Union[JordanDecomposition[Partition[#, 2]][[2]] & /@ Tuples[{0, 1}, 4]]

Transposition of n x n matrices is a linear mapping. If every (finite-dimensional) linear mapping has a matrix representation, why can't we say that there is a matrix A such that \(AM = M^T\) for every M? What is the correct statement?
You have to flatten M. Transposing is then interchanging the components of this M vector in a suitable way.

\(AM = M^T\) gives \(A = M^T M^{-1}\) (for invertible M), and that A can be shown to depend on the specific values in M. This is easy to show for a 2×2 matrix.
 