**EDIT:** Never mind; someone gave me an answer when I thought it was fairly clear I wanted to solve it myself and was just offering it as a neat puzzle (at least to me). I guess I should be more explicit, or at least not post a puzzle before I’ve solved it.

So I have a logic puzzle, which happens to also be a programming puzzle. You have some incomplete data: the name of a value, a whole-number version of the value which omits the decimal places, and what percent that value is of the highest value. Example data set:

```
A, 38, 100%
B, 32, 86.2672%
C, 18, 49.4411%
D, 15, 39.4548%
E, 14, 38.2085%
```

I want to discover what the actual value is, with the decimals. I don’t have the exact values for this set, as I’ve not solved the puzzle yet. The real series will have about 20 pieces of data, and the second number will grow over time, into the thousands in some cases. So I want to derive a way to use all of the data to get as accurate an answer as possible, understanding that as the numbers grow, the fidelity will eventually degrade to the point where no decimal places can be recovered.

I wanted to pose this question to see if other folks would be interested in taking a stab at it too, whether for personal fun, to share solutions with others who tried, or maybe just to read the results. This isn’t anything I’m trying to make money off of, and it isn’t for work. I’ll provide a few full data sets to anyone who wants to take a stab. There are some things in the data sets that will let us verify the accuracy of the solutions. Maybe folks can even blog about how they approached it? Folks interested?

Comment below or tweet at me to let me know. I’ll be preparing the data tomorrow anyway, so this is mostly about whether I set it up for puzzle consumption or not.


Are the whole numbers truncated or rounded? How about the percentages?

Either way, it’s easy to solve. Each data value gives you an upper and a lower bound for the highest value H. Take the max of the lower bounds and the min of the upper bounds and those are the overall bounds for H.

E.g., assuming truncation, then in this example the first line means 38 <= H < 39. The second line means 32/0.862673 < H and H < 33/0.862672 (see how I adjust the final digit of the percentage to account for the uncertainty of the percentage: the bumped-up digit goes into the lower bound, the as-displayed digit into the upper). Etc.

In this case I get a lower bound of 15/0.394549 ≈ 38.018092 (from line D) and an upper bound of 33/0.862672 ≈ 38.253241 (from line B).
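A minimal Python sketch of this interval intersection (my own illustration, assuming both the whole numbers and the percentages are truncated; the 100% line is special-cased since it is the max itself):

```python
# Each line (n, p) with truncated integer n and truncated percentage p
# constrains the highest value H:
#   n <= v < n + 1,  p/100 <= v/H < (p + 0.0001)/100,  so
#   n / ((p + 0.0001)/100) < H < (n + 1) / (p/100)

data = [
    ("A", 38, 100.0),   # the 100% line is the max itself: 38 <= H < 39
    ("B", 32, 86.2672),
    ("C", 18, 49.4411),
    ("D", 15, 39.4548),
    ("E", 14, 38.2085),
]

ULP = 0.0001  # resolution of the 4-decimal truncated percentages

def bounds(n, p):
    """Return the (low, high) interval for H implied by one line."""
    if p == 100.0:  # exact by construction; only the integer is truncated
        return (n, n + 1)
    return (n / ((p + ULP) / 100), (n + 1) / (p / 100))

# Intersect all per-line intervals: max of the lows, min of the highs.
lows, highs = zip(*(bounds(n, p) for _, n, p in data))
low, high = max(lows), min(highs)
print(f"H is in ({low:.6f}, {high:.6f})")
```

Running this on the example data reproduces the bounds above: the lower bound comes from line D and the upper bound from line B.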

Guess it wasn’t clear that I wanted to solve it myself and was just posing it as interesting for others. Didn’t realize it was that simple.

This seems… straightforward to me? Each line specifies a range for A, as follows:

32 <= B < 33

86.2672 <= 100B/A < 86.2673

so 100*32/86.2673 < A < 100*33/86.2672

(numbers are slightly different if decimals are rounded instead of truncated)

You could calculate this for every line and take the intersection of all the resulting intervals (in linear time), but it doesn't gain you a lot of precision. For the example data, the resulting interval of approximately (38.02, 38.25) doesn't even narrow down the tenths digit.
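The rounded case can be checked with the same machinery, using half-ulp adjustments instead. Here is a sketch (my own illustration, assuming half-up rounding on both the integers and the percentages); interestingly, for this example data the per-line intervals fail to overlap, which would suggest the values are truncated rather than rounded:

```python
# Same interval intersection, but assuming the displayed numbers are
# ROUNDED (half-up) instead of truncated: an integer n means
# n - 0.5 <= v < n + 0.5, and a 4-decimal percentage p means
# p - 0.00005 <= 100*v/A < p + 0.00005, where A is the highest value.

data = [
    ("A", 38, 100.0),   # the 100% line is the max itself
    ("B", 32, 86.2672),
    ("C", 18, 49.4411),
    ("D", 15, 39.4548),
    ("E", 14, 38.2085),
]

HALF_ULP = 0.00005  # half the resolution of the 4-decimal percentages

def bounds_rounded(n, p):
    """Return the (low, high) interval for A implied by one rounded line."""
    if p == 100.0:  # the max itself: only the integer is rounded
        return (n - 0.5, n + 0.5)
    return ((n - 0.5) / ((p + HALF_ULP) / 100),
            (n + 0.5) / ((p - HALF_ULP) / 100))

lows, highs = zip(*(bounds_rounded(n, p) for _, n, p in data))
low, high = max(lows), min(highs)
if low < high:
    print(f"A is in ({low:.6f}, {high:.6f})")
else:
    print("Empty intersection: this data is not consistent with rounding")
```

For the example set, line A forces A >= 37.5 while line C forces A < 37.42 or so, hence the empty intersection.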