Comments on A++ [Eric Torreborre's Blog]: Algorithmic panic (13 comments)

Eric (2008-05-09 11:02):
Hi all,

I've posted a follow-up (http://etorreborre.blogspot.com/2008/05/algorithmic-panic-take-2.html) to that post, which should be both more correct and faster.

Thanks again for your comments.

Eric.

Mikko (2008-05-07 05:59):
Others have already pointed out complexity issues with your median algorithm, and I think they're right. Two more notes:

1) The takeBetween implementation as given is O(n), whether the list is a linked one or supports O(1) random access. Using bisection in takeBetween would make that part O(log n). Someone who knows better can tell what the overall time complexity would be then: O((log n)^2), or something better?

2) A naive implementation using arrays would still be slow. To really get to O(log n), one must not allocate any new arrays when discarding elements. It's better to keep the input arrays intact and just play with indices.

However, I don't think the complexity matters that much, as the answer is wrong anyway :) Take, for example, list1 = [1,2,3,4,5,6,7] and list2 = [0,8,9]. The correct answer is 4.5, or 4 if one cheats a bit. Your code gives 7, if I read it correctly and Scala starts list indexing at zero.

The problem is that "the median of both lists is something like the median of the 2 sublists of all elements between median1 and median2" is true only if the two sublists have equal lengths. The second commenter's solution seems closer to right, and the one linked by "faden" is even more right: when discarding (about) half of the list items at each step, one must be careful to discard exactly half of the values below the median and half above. Otherwise the median of the remaining values is not the same as the median of all the values.

Tom (2008-05-04 00:35):
Finally, if the "object" in the two-keys problem is a boolean, you could use two Bloom filters (one for each key) and do it in constant time and space(!), if you don't mind a very small number of false positives; and in some cases it's quite possible to prove there can never be a false positive on a Bloom filter of large enough size.

Tom (2008-05-04 00:27):
Minor editing error: there is no trie involved; you simply sort the items in a list by key1, then key2 (and of course keep the left and right pointers so the elements stay sorted by key2).

You can find all items with key1, or with key1 and key2, with a binary search. You can find all items with key2 using the inner tree. In both cases it's O(log n).

Tom (2008-05-04 00:18):
Regarding the "two keys" problem, you haven't given us enough information to identify the best solution.

The main missing piece is the access pattern: when are items inserted into your structure, and how are they accessed? There seem to be three possible patterns:

1. The structure is created early and then accessed a very large number of times.
2. At runtime, instances of the structure are created and then accessed.
3. Reads and writes are interleaved.

There are other important questions, like "How expensive is memory?", "How large are the underlying objects being stored?", "How many objects are there?" and "How important is execution time?"

If the objects stored in the table are small and there are a lot of them, you're going to have significant overhead with the obvious "three hash maps" solution: at least 32, and probably 64, bytes of overhead per object, which is harsh if the objects are, for example, integers.

If you're willing to drop from O(1) to O(log n) performance (I think of log n as "almost constant" myself...), store the objects in a large trie, sorted by key1 then key2, and add to each object array offsets sorting the items by key2 in a binary tree.

You can almost certainly make the array for the key2 tree 4 bytes per entry, so that's an overhead of only 8 bytes per object. Plus you get the advantage of allocating one massive block and filling it linearly, rather than making a ton of tiny memory requests as you go; in fact, you could easily put such a table in ROM if you were writing for an embedded device (making it essentially "free").

faden (2008-05-03 18:25):
Hello,

Strangely, I was also puzzled by this problem lately and had to find something: http://batiste.dosimple.ch/blog/2008-04-25-1/

Eric (2008-05-03 10:03):
Wow, I didn't think I would get this many comments so soon! My next step is to review and instrument the code to check that its complexity is really what I expected.

I will do so and post a follow-up. Thanks all!

Johnny (2008-05-03 09:00):
I have to agree with Sam. For example:

map = {
  key1 => {
    key2 => 'value1',
    key3 => 'value2',
  },
  key2 => {
    key1 => 'value1',
  },
  key3 => { key1 => 'value2' },
}

Then the value of (key1, key2) is map{key1}{key2}, the values for key1 are values map{key1}, etc. Sorry for the perl-ish syntax.

Jorge Ortiz (2008-05-03 08:42):
I don't think operations on Lists have the run-time properties you think they do. For example, ExtendedList.median will run in O(n) time, since List.apply and List.size each run in O(n) time. What you probably want is a data structure that implements RandomAccessSeq (like Array), which guarantees O(1) element access and O(1) length computation.

Peter (2008-05-03 07:04):
Your problem is a graph problem. She should build a graph data structure and store the key as a value attached to the edges. Then (u, v) is an edge of the graph with some value, edges(u) is all the things attached to u, and the same for v. Graphs are easily implemented as two-level hash tables.

Sam (2008-05-03 06:37):
This comment has been removed by the author.

liesen (2008-05-03 06:15):
Close enough, but not quite. The key idea here is that it's safe to discard half of the elements of each list, cutting the total number of elements in half at each step. To get all the elements between (p1, p2): drop the n/2 elements smaller than p1 from the list containing the smaller median, then drop the upper m/2 elements from the other list, then repeat the process.

Alexander (2008-05-03 03:23):
I don't see how this algorithm could find medians in O(log n). takeWhile(_ <= b) alone takes n/2 steps and is therefore O(n). Cutting your problem size in half at each step does not give you O(log n) unless each step is O(1). With each step being O(n), you should get O(n log n) (like mergesort), although I would have to analyze your algorithm more carefully and consult the Master theorem to be sure.
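The fix the commenters converge on, discarding elements only in a way that keeps equal counts below and above the eventual median, is usually done as a partition-based binary search over the shorter list, which runs in O(log(min(m, n))) without allocating new arrays (Mikko's point 2). A minimal sketch, written here in Python rather than the post's Scala purely as an illustration; the function name is made up:

```python
def median_two_sorted(a, b):
    """Median of two individually sorted lists in O(log(min(m, n))) time."""
    if len(a) > len(b):
        a, b = b, a  # binary-search over the shorter list only
    m, n = len(a), len(b)
    half = (m + n + 1) // 2  # size of the combined left partition
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2   # take i elements of a on the left...
        j = half - i         # ...and j elements of b, so left half is exact
        a_left = a[i - 1] if i > 0 else float('-inf')
        a_right = a[i] if i < m else float('inf')
        b_left = b[j - 1] if j > 0 else float('-inf')
        b_right = b[j] if j < n else float('inf')
        if a_left <= b_right and b_left <= a_right:
            # Valid partition: everything on the left <= everything on the right
            if (m + n) % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2
        elif a_left > b_right:
            hi = i - 1  # took too many from a
        else:
            lo = i + 1  # took too few from a
```

On Mikko's example, list1 = [1,2,3,4,5,6,7] and list2 = [0,8,9], this returns 4.5: only indices move at each step, so no sublists are copied, and the left partition always holds exactly half of the combined elements.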