
Programming puzzle: highest proportion of 1s in a substring

Test mine :)
I added "extending". Does not really improve that much. Cumulative skips only small bit larger than before, but added code makes single n-pass a bit longer.
Not really worth it. "Better" is an enemy of "good"
Code:
#!/usr/bin/python3

def count(arr, r0):
  # Length of the run of 1s starting at position r0
  # (counts to the end of the array if the run never hits a 0).
  n=0
  for i in range(r0, len(arr)):
    if arr[i] == 0:
      break
    n+=1
  return n
def scan_array(arr,k, sum):
  # Slide a window of fixed length k over arr, tracking the maximum
  # window sum (number of 1s) and every start position that attains it.
  best=sum
  solutions=[0]
  for i in range(k,len(arr)):
    sum+=(arr[i]-arr[i-k])
    if sum>best:
      best=sum
      solutions=[i-k+1]
    elif sum == best:
      solutions.append(i-k+1)
  # Among the tied start positions, prefer the one whose window is
  # followed by the longest run of 1s (the "extending" step).
  ad2=0
  bs=solutions[0]
  for sol in solutions:
    n=count(arr,sol+k)
    if n>ad2:
      ad2=n
      bs=sol
  return bs, best, ad2

def scan_2k(s2,k):
  # Densest substring of length >= k: only window lengths k..2k-1 need
  # checking, since any longer substring contains a piece in that range
  # that is at least as dense.
  arr=[int(x) for x in s2]
  sum=0
  for i in range(k):     # sum of the first k characters
    sum+=arr[i]
  k2=min(2*k,len(arr))
  best_density=0.
  ones=0                 # best window sum found so far
  add=0                  # remaining lengths covered by the last extension
  skipped=0              # window lengths skipped without a full pass
  scanned=0              # window lengths that needed a full pass
  pos=0
  bl=k
  for i in range(k,k2):  # i is the current window length
    if add:
      # Still inside the run of 1s that extended the last best window:
      # these lengths are already covered, so skip the full pass.
      sum+=arr[i]
      skipped+=1
      add-=1
      continue
    # Optimistic bound: even if the best count grew by one at this
    # length, could the density beat the current best?
    ones=ones+1
    b=float(ones)/float(i)
    if b > best_density:
      scanned+=1
      p, ones, add = scan_array(arr,i,sum)
      ones+=add          # extend the best window into the following 1s
      b=float(ones)/float(i+add)
      if b>best_density:
        best_density=b
        pos=p
        bl=i+add
      add+=1
    else:
      skipped+=1
    sum+=arr[i]          # sum now covers the first i+1 characters
  frm="k={} Best={} pos={} len={} skipped={}/{}"
  print(frm.format(k,best_density,pos,bl,skipped,skipped+scanned))
  print("substring", s2[pos:(pos+bl)], "\n")
str2="1011110011010110000010110001111000111010111101001010100100011101010111010101001010000011010000100001100011101000100111010111111111100000"
for i in range(1,50):
  scan_2k(str2, i)

scan_2k("011101011111111110", 13)
scan_2k("00000000000000000000000", 13)

When doing as many passes as anything that does this, there are two concerns; one is reallocation, which absolutely must be avoided.
It's Python; reallocation is the last thing you should worry about.
Also, yours is written in Python with no comments or documentation, duck typing, indexing into bloated types, no inlining, no reusability, and use of floating-point arithmetic.
Yours does not work yet, and does not really have comments/documentation.
Again, you are complaining about python being python.
I used it to demonstrate that the algorithm works and is efficient (and it does), and because it was used before me, not because I wanted to smash real-world speed records.
Extend isn't correct because it directly lowers the cycle count; it's correct because it retains the integrity of the numerators and denominators while operating on full segments and not testing every segment.
OK, I did not ask you about that.
it also prevents rounding errors at high N and low K.

Python is for big, high-level things.

Rounding errors for substrings k=10^15 characters long? That's 1000 terabytes of characters, or 1000 terabits of bits :)
Your 64-bit integer arithmetic is limited to 4 GB strings. So I think I am better than you in this regard :)
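
(As an aside, in Python the float-versus-integer worry can be sidestepped entirely: two densities ones/length can be compared exactly by cross-multiplying the integer counts, and Python integers never overflow. A minimal sketch for illustration only; the function name is made up and this is not part of either posted solution.)
Code:
def denser(ones_a, len_a, ones_b, len_b):
  # Is ones_a/len_a strictly greater than ones_b/len_b?
  # Cross-multiplying avoids floating point entirely, so there is no
  # rounding at any substring length and no fixed-width overflow.
  return ones_a * len_b > ones_b * len_a

print(denser(7, 13, 6, 11))  # False: 7/13 < 6/11
print(denser(6, 11, 7, 13))  # True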
 
Sorry, was a bit stoned and zoned into a very narrow context.

In truth I still am. Now I'm studying binary multiplication and its properties and patterns.

I don't really study much math these days, and it's been giving my brain a really hard spin. I understand that anything I do will probably not be as good as what others have made, but it might end up with me designing my own multiplier circuit, and maybe a floating-point approximator?

Like, this is fun stuff. I was just multiplying numbers and I started seeing patterns, just in the way binary dances given facts of a number. I feel like I did in math class when I was six and I figured out powers before my teacher could reveal how they functioned following an example meant to awe us at how big they were.

I really enjoy forcing myself to actually figure out how math works on a fundamental level, and this has introduced me to a hugely interesting problem of mathematical efficiency. And I'm not just looking up the answer, I'm actually making myself figure it out, and I'm figuring it out!

Honestly, I don't care about the density problem (ok, I'll revisit that way later), but now I want to dive deep into the insanity of binary multiplication. Congratulations, you have inspired me.

I find immense value in figuring out what others have figured out, but without them when possible.

Anyway, I'll make a post about what I learn, and it's going to be something someone already knows, but it's going to be a fun proof.

You may not care about world records or anything, but now I think I kind of do? Because the fact is, when you can come up with something nobody has ever told you, you can come up with something nobody has ever thought.
 
Sorry, was a bit stoned.
Damn, now I want to know what is "i"

I want to dive deep into the insanity of binary multiplication
OK, it confirms you are still stoned :)

Ok, I did it, and I have a general constant time solution for binary multiplication. Question though, has this been done before? It would be a single instruction multiply.
 
You did what exactly?

I made a schema for multiplying in constant time.

So, I have two binary numbers I want to multiply. For each bit, I perform one set of logical operations, and the result goes to a circuit as a truth output. Then, the truth of the second frame renders that into a completed multiplication result.
 

Constant time regardless of the size of the number? 16 bit, 32 bit, 64 bit, 10^10 bit long, the same time?
I find that hard to believe.
Multiplication is naturally an O(N^2) problem. There are algorithms better than O(N^2), but they pay off only on extremely long numbers, on the order of 10^10 digits.
 

Something like that? But yes. Assuming that you had a "secret" value for at least that size N, yes. I say secret because the value is known and calculated to an arbitrary N.
 

So, I'll let you be the judge of it. I'll post it in a day or two once I have time to write it out better than a series of truth tables on my phone, and maybe talk to someone at the U about it.
 
Wow, that would be the greatest mathematical discovery in a long while.
Multiplication in O(N^0) :) Better even than O(N).
 
Alright, after digging into whether the problem was already solved: it was, some time ago:

"In base 2, long multiplication reduces to a nearly trivial operation. For each '1' bit in the multiplier, shift the multiplicand by an appropriate amount, and then sum the shifted values."

That was fun, though!
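
(For the curious, the shift-and-add method quoted above fits in a few lines of Python. This is just a sketch of the textbook idea, not anyone's circuit or posted code.)
Code:
def shift_add_multiply(a, b):
  # Base-2 long multiplication: for each 1 bit of the multiplier b,
  # add the multiplicand a shifted left by that bit's position.
  result = 0
  shift = 0
  while b:
    if b & 1:
      result += a << shift
    b >>= 1
    shift += 1
  return result

print(shift_add_multiply(0b1011, 0b110))  # 11 * 6 = 66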
 
That's your ground-breaking O(1) algorithm? Damn, you ARE stoned :)
 

I said it was probably already solved! But the cool part comes in that you can do everything at the same time. Every output for every bit can be kept independent for any integer until the summation, and thus all calculated at the same time.

Let's imagine a processor with an arbitrarily large number of registers. At least 34, each 64 bits wide for N = 32.

Each bit is connected to output, shifted, to a separate register. Then that register is hit with a series of fixed XOR operations depending on the length: the parity of a number equals the parity of the XOR of the two equal halves of its binary representation.

Strictly speaking, this is one fixed operation for a given maximum length.
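
(The parity claim above is easy to sanity-check in Python. A quick sketch with made-up helper names, checking every 8-bit value.)
Code:
def parity(x):
  # Parity of the popcount of x: 0 if x has an even number of 1 bits.
  return bin(x).count("1") & 1

def folded_parity(x, bits):
  # XOR the two equal halves of the binary representation and take the
  # parity of the result; XOR-folding preserves popcount parity.
  half = bits // 2
  lo = x & ((1 << half) - 1)
  hi = x >> half
  return parity(lo ^ hi)

for x in range(1 << 8):
  assert parity(x) == folded_parity(x, 8)
print("XOR-folding preserves parity for all 8-bit values")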
 
Nope, you are still wrong. You can't do EVERYTHING at the same time.
You can't even do addition with zero delay. That's why it takes a fair bit of silicon to do these things in one clock cycle.
 

There is no addition, only parity. The operation on the numbers is a fixed calculation.
 
Multiplication involves addition. And yes, if you could make a cheap adder with zero delay (a delay that does not depend on the length of the numbers), you would be a billionaire.
Of course, you have to be stoned to think it's possible.
 

You are moving goalposts. I'm not saying without delay, just in constant time. In one frame all the adders produce shifted bits to all output registers, and all additions happen in a fixed number of operations. That is constant time. It is potentially TRIVIALLY so, except for the fact that all results are rendered in two steps: first the shifts, then the additions.
 
I am not moving anything. Constant delay is the same thing as zero delay, or rather minimum delay.
Addition involves "carry out" bits, and that creates delays which can be reduced only by increasing the complexity of the logic, which in the end increases power use and cost. Programmers who don't know the basics of the ALU sadden me.
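
(To make the carry-chain point concrete, here is a toy ripple-carry adder in Python. Each bit's carry depends on the carry from the bit before it, so the chain is as long as the numbers are wide; this is a sketch of the standard textbook circuit, not anyone's actual hardware.)
Code:
def ripple_carry_add(a_bits, b_bits):
  # a_bits, b_bits: lists of 0/1, least significant bit first, equal length.
  # Each full adder needs the carry from the previous position, so the
  # carry ripples through all N positions: the delay grows with N.
  carry = 0
  out = []
  for a, b in zip(a_bits, b_bits):
    out.append(a ^ b ^ carry)               # sum bit
    carry = (a & b) | (carry & (a ^ b))     # carry out
  out.append(carry)
  return out

# 13 + 11 = 24: 1101 + 1011, written LSB first
bits = ripple_carry_add([1, 0, 1, 1], [1, 1, 0, 1])
print(sum(bit << i for i, bit in enumerate(bits)))  # 24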
 

We aren't talking time delay. We are talking about a math problem.

Namely one that says that for any multiplication of two integers, there is a constant-time operation that can solve it, and likewise any other combination with a smaller result. (Technically, it can solve any combination with an equal total magnitude; 1111*1 occupies the same bit depth as 11*11.)

And I'm still wrong. It's O(N), though a trivial solution at large N, seeing as the computational complexity is still all there, just done in parallel.
 