1 2012-12-15 00:03:54 <Diablo-D3> MAKE THIS HAPPEN, ASSHOLES: https://petitions.whitehouse.gov/petition/define-westboro-baptist-church-hate-group-due-promoting-animosity-against-differing-cultural/xHF0d3nq?utm_source=wh.gov&utm_medium=shorturl&utm_campaign=shorturl
  2 2012-12-15 00:05:53 <gmaxwell> Diablo-D3: please exclude #bitcoin-dev from future multichannel spamming.
  4 2012-12-15 00:06:22 <Diablo-D3> gmaxwell: /amsg doesn't work that way
  5 2012-12-15 00:06:30 <Diablo-D3> xchat IS open source, however
  6 2012-12-15 00:06:39 <Diablo-D3> so I expect to see your patch merged by next release cycle
  7 2012-12-15 00:06:44 <Diablo-D3> thanks for volunteering
  8 2012-12-15 00:07:12 <pjorrit_> lol
  9 2012-12-15 00:07:21 <zeks2> :)
 10 2012-12-15 00:07:25 <sipa> haha
 12 2012-12-15 00:07:34 <wumpus> hehe
 13 2012-12-15 00:07:43 <wumpus> that was fast!
 14 2012-12-15 00:08:27 <gmaxwell> :P
 15 2012-12-15 00:09:35 <zeks2> diablo is crazy these days :)
 16 2012-12-15 00:10:15 <zeks2> gmaxwell, since you are a bitcoind developer, could you suggest some pool? i don't want only solo mining, because getting a block that way is almost impossible :)
 17 2012-12-15 00:12:13 <gmaxwell> zeks2: the only pool I recommend is p2pool, you'd mentioned some performance problems with it and hundreds of miners, though.
 18 2012-12-15 00:12:42 <gmaxwell> (though it's possible to run multiple copies of p2pool, I run two myself, though for redundancy, not scaling)
 19 2012-12-15 00:13:41 <zeks2> yes, that's right, but in case of some increase in miners in the near future, i would face the same problem and would have to open new ones all the time :) for me p2pool is great, only this is a problem (and you have a good memory, you remember what i asked)
 21 2012-12-15 00:15:16 <gmaxwell> zeks2: I'm not sure I understand exactly what you're doing. Are you trying to operate a public pool?
 22 2012-12-15 00:15:48 <zeks2> maybe yes, not sure yet, but i would like to be ready for it
 23 2012-12-15 00:18:07 <gmaxwell> well you could potentially use the slave mode of eloipool (luke's pool software) to both run your own public pool and participate in a larger one. I'm not sure if that feature is currently working.
 24 2012-12-15 00:20:30 <zeks2> ok, i'll try out something. also tell me this: since eloipool doesn't have a web interface or anything, can you suggest some, i think it is called, "front-end server side"
 25 2012-12-15 00:20:40 <zeks2> so i can watch stats, since eloipool doesn't even show the current hashrate
 26 2012-12-15 00:25:00 <gmaxwell> Some of the software used for that on eligius may be open source… but I'm not clear on the details. The pool software can log into the database of your choosing, so finding out stats is just a matter of querying the database.
 28 2012-12-15 00:27:14 <zeks2> oki doki
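As an aside to gmaxwell's "just query the database" suggestion: a minimal sketch of the kind of stats query that could produce a hashrate estimate, assuming a purely hypothetical shares(username, difficulty, time) table (eloipool's actual logging schema may differ):

```python
import sqlite3
import time

# Hypothetical schema: shares(username TEXT, difficulty REAL, time INTEGER)
conn = sqlite3.connect("pool.db")

window = 600  # estimate over the last 10 minutes of submitted shares
since = int(time.time()) - window

# A share of difficulty d represents roughly d * 2**32 hashes on average,
# so summing difficulty over a time window gives an estimated hashrate.
(total_difficulty,) = conn.execute(
    "SELECT COALESCE(SUM(difficulty), 0) FROM shares WHERE time >= ?",
    (since,),
).fetchone()

hashrate = total_difficulty * 2**32 / window  # hashes per second
print(f"estimated pool hashrate: {hashrate / 1e9:.2f} GH/s")
```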
 29 2012-12-15 03:50:01 <D337z> Hey guys, I have a proposed change/addition to the JSON protocol for block updates to significantly decrease bandwidth and optimize propagation...
 30 2012-12-15 03:50:18 <D337z> Who wants to hear it?
 32 2012-12-15 03:53:03 <gmaxwell> D337z: uh. does it start with a misunderstanding that JSON is used to relay blocks in the network?
 33 2012-12-15 03:54:30 <D337z> Well, you tell me.  I'm proposing that the messages be compressed with Gzip then HPack prior to transport so that more than one message can be sent on updating large numbers of blocks at a time.
 34 2012-12-15 03:54:59 <D337z> I'm testing the compression right now actually.
 35 2012-12-15 03:55:46 <D337z> And I'm seeing a roughly 25% decrease in data size by Gzip alone.  I imagine that using HPack should drop it even more.
 36 2012-12-15 03:56:47 <D337z> And, while CJson could be used, HPack shows more promise and less overhead.
 38 2012-12-15 03:57:09 <gmaxwell> D337z: ... blocks are not sent between nodes as json. They're sent in the raw binary serialized format.
 40 2012-12-15 03:57:31 <D337z> Which can be compressed
 41 2012-12-15 03:57:44 <D337z> to about 75% of their original size
 42 2012-12-15 03:58:17 <D337z> And then the blocks are updated using JSON, correct?
 43 2012-12-15 03:58:20 <gmaxwell> No, they can't. ASCII json, I'd believe that. but not the actual raw blocks.
 45 2012-12-15 03:58:27 <gmaxwell> No, again, they are not.
 46 2012-12-15 03:59:32 <D337z> Well, in any case I'm compressing it and it works
 47 2012-12-15 03:59:39 <gmaxwell> You are compressing the json.
 49 2012-12-15 03:59:53 <D337z> No, I'm compressing the block data
 50 2012-12-15 04:00:06 <D337z> To about 75% of its original size
 51 2012-12-15 04:00:21 <gmaxwell> D337z: A block at a time?
 52 2012-12-15 04:00:29 <gmaxwell> D337z: and block data from what source?
 53 2012-12-15 04:00:57 <D337z> From the already downloaded blocks.
 54 2012-12-15 04:01:06 <gmaxwell> D337z: where are you reading the data from?
 55 2012-12-15 04:01:19 <D337z> Which means that they are at least 75% predictable
 56 2012-12-15 04:02:18 <gmaxwell> ^ (see prior question)
 58 2012-12-15 04:02:44 <D337z> Which data?
 59 2012-12-15 04:02:56 <D337z> The blocks or some of the resources I've used?
 60 2012-12-15 04:03:01 <gmaxwell> D337z: where are you getting the block data you're compressing. Be specific.
 61 2012-12-15 04:03:48 <D337z> It's the block data that's already been downloaded from peers.
 62 2012-12-15 04:04:02 <gmaxwell> Great. And how are you getting it to feed it to your compressor?
 63 2012-12-15 04:04:35 <D337z> Directly compressing it.
 64 2012-12-15 04:04:57 <gmaxwell> D337z: Say I wanted to reproduce your results. I have a synchronized node. What would I do first?
 65 2012-12-15 04:04:58 <D337z> And on-the-fly
 67 2012-12-15 04:06:51 <D337z> Well, there are a couple of ways you can do it.  Either over VPN with packet encryption enabled, or take the block data saved in the binary and compress it directly.  Though the results will be slightly different, they should be within 5% of each other.
 68 2012-12-15 04:07:03 <gmaxwell> What are _you_ doing that you got 75% with?
 69 2012-12-15 04:07:23 <D337z> Compressing the binary of already downloaded blocks
 70 2012-12-15 04:07:38 <gmaxwell> Which binary?
 72 2012-12-15 04:08:11 <D337z> In the bitcoin folder of appdata/Roaming
 73 2012-12-15 04:08:24 <D337z> The 2+ GB one
 75 2012-12-15 04:08:25 <gmaxwell> (compressing a file of many blocks is not, sadly, the same as compressing a single block at a time, alas)
 76 2012-12-15 04:08:46 <D337z> I know.  But it started out at 69% and has only changed mildly since
 77 2012-12-15 04:09:02 <gmaxwell> D337z: there are at least two 2GB ones. But okay, one of the blk0001.dat files.
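For anyone wanting to repeat the whole-file measurement being discussed, a minimal sketch, assuming the default Windows datadir mentioned above (adjust the path for other platforms; results will vary with the compressor and settings):

```python
import os
import zlib

# Default Windows datadir as mentioned above; adjust for your own system.
path = os.path.expandvars(r"%APPDATA%\Bitcoin\blk0001.dat")

original = os.path.getsize(path)
compressed = 0

# wbits=31 selects the gzip container; stream in 1 MiB chunks so a 2 GB
# file never has to fit in memory at once.
compressor = zlib.compressobj(9, zlib.DEFLATED, 31)
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        compressed += len(compressor.compress(chunk))
compressed += len(compressor.flush())

print(f"{original} -> {compressed} bytes ({compressed / original:.1%} of original size)")
```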
 78 2012-12-15 04:09:23 <gmaxwell> What were you saying about CJson and HPack then?
 79 2012-12-15 04:11:21 <D337z> The JSON messages could be further compressed using them.  But I may have misunderstood that the payload of blocks was sent as JSON messages due to the use of the Stratum protocol.  Which means they could be further compressed.
 81 2012-12-15 04:11:36 <D337z> Though, I'm more on the networking side of things.
 83 2012-12-15 04:11:41 <gmaxwell> In any case, the serialized blockchain compresses, with no separation of blocks, by 25% with gzip… e.g. to 75% of its original size. Which is considerably less than you're reporting.
 84 2012-12-15 04:12:10 <gmaxwell> Stratum is not part of bitcoin at all. It's a third party protocol used by some mining pools to send block headers.
 86 2012-12-15 04:12:40 <D337z> Ah, then Ufasoft misunderstood what I was trying to tell them.  : /
 87 2012-12-15 04:13:44 <D337z> They thought I was talking about the mining protocol.  But I was talking about updating the already existing blocks and using compression to decrease the required amount of data that needed to be sent and received.
 88 2012-12-15 04:14:56 <gmaxwell> most of the data in an actual block is signatures, which are pure entropy.  The second biggest chunk are public keys and the third are hashes of public keys. None of these are natively compressible, but the keys and key hashes do get repeated, so they can provide some compression, though it's greatly limited if you must work a block at a time.
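To see the point about per-block compression, one could split the block file into individual blocks and compress each on its own, then compare against the whole-file number. This sketch assumes the usual on-disk framing of blkNNNN.dat (4-byte network magic followed by a 4-byte little-endian length before each block):

```python
import struct
import zlib

MAGIC = bytes.fromhex("f9beb4d9")  # main-network magic prefixing each block on disk

def blocks(path):
    """Yield raw serialized blocks from a blkNNNN.dat file (magic + length framing assumed)."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            magic, size = header[:4], struct.unpack("<I", header[4:])[0]
            if magic != MAGIC:
                return  # trailing padding or unexpected data; stop here
            yield f.read(size)

raw = per_block = 0
for block in blocks("blk0001.dat"):
    raw += len(block)
    per_block += len(zlib.compress(block, 9))  # each block compressed in isolation

print(f"per-block compression: {per_block / raw:.1%} of original size")
```

Because repeated keys and key hashes tend to recur across blocks as much as within a single one, the per-block ratio is expected to come out noticeably worse than compressing the file as a whole.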
 89 2012-12-15 04:15:29 <Luke-Jr> D337z: GBT supports gzip compression
 90 2012-12-15 04:15:35 <gmaxwell> The overall average bitcoin data rate is only about 14kbit/sec maximum... most of the slowness in relaying comes from the computational and disk IO costs of validating blocks.
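The ~14 kbit/sec figure follows from the protocol limits alone; a back-of-the-envelope check, assuming the 1 MB maximum block size and the 10-minute block target:

```python
MAX_BLOCK_BYTES = 1_000_000   # protocol maximum block size
BLOCK_INTERVAL_S = 600        # 10-minute block target

rate_kbit_per_s = MAX_BLOCK_BYTES * 8 / BLOCK_INTERVAL_S / 1000
print(f"{rate_kbit_per_s:.1f} kbit/s")  # ~13.3 kbit/s, i.e. roughly 14 kbit/s
```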
 91 2012-12-15 04:16:14 <gmaxwell> But I'm still very curious how you're getting 75% compression on block files... that sounds far in excess of what I thought possible with a general purpose compressor.
 92 2012-12-15 04:16:45 <gmaxwell> (I thought xz on the whole files got closer to 50% but 75% is an enormous increase beyond that!)
 94 2012-12-15 04:17:29 <D337z> Partially, I suppose, because a lot of the block can be translated to repeated 0's for about half of it.
 95 2012-12-15 04:17:46 <gmaxwell> ...
 96 2012-12-15 04:17:54 <D337z> Hold on, 75% of its original size
 97 2012-12-15 04:18:03 <D337z> So a decrease of 25%
 98 2012-12-15 04:18:18 <D337z> So 50% would be better
 99 2012-12-15 04:18:25 <gmaxwell> oh well, that's boring and well known. And doesn't support "75% predictable"!
100 2012-12-15 04:19:14 <D337z> Sorry, I got my sides of 100% confused.  >_<  I meant 25% predictable.  I have a cold.
101 2012-12-15 04:19:25 <D337z> Math is being clogged
102 2012-12-15 04:19:37 <gmaxwell> there aren't "0's for about half of it"  ... the only repeated zeros are in the prev-block part of the header, but there are only six of them there.
104 2012-12-15 04:22:32 <D337z> And that's at the end, right?
105 2012-12-15 04:22:49 <D337z> What's the numeric data that follows it?
106 2012-12-15 04:23:22 <D337z> I think it's the 256-bit hash
107 2012-12-15 04:26:45 <D337z> Eh, oh well...I'm just thinking that the required data to be sent and received could be decreased.  I have a 10Mb connection and it's still hell trying to download even a day's worth of blocks if I've been offline.
108 2012-12-15 04:27:05 <D337z> Some countries don't have that great of a connection
109 2012-12-15 04:28:11 <D337z> So, JSON data can be compressed if need be wherever it's used, and the block data can be compressed to around 75% of its original size (as tested on serialized blocks, which should give an indication of overall efficiency).
110 2012-12-15 04:28:44 <D337z> I managed to compress the first set of blocks to 76% of its original size.
111 2012-12-15 04:29:04 <D337z> From 1.95 to 1.48
112 2012-12-15 04:29:10 <D337z> GB
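For reference, those figures are consistent with the 76% quoted just above:

```python
print(f"{1.48 / 1.95:.1%} of original size")  # ~75.9%, i.e. roughly 76%
```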
113 2012-12-15 04:30:53 <ciphermonk> I think we're CPU-bound for transaction verification rather than network-IO bound for downloading blocks
115 2012-12-15 04:31:34 <ciphermonk> You could go waaaay faster by disabling transaction verification altogether but I don't think it's advisable
116 2012-12-15 04:31:36 <D337z> And, since the blocks don't seem to be sent using the JSON messages, whatever protocol is used will still be able to transport the data 25% faster excluding its own header.
118 2012-12-15 04:33:25 <D337z> Well, it's still downloading a DVD's worth of data.
119 2012-12-15 04:35:53 <D337z> So, what transport protocol IS used to send blocks from peer-to-peer?
120 2012-12-15 04:36:07 <D337z> If not JSON
122 2012-12-15 04:37:54 <D337z> Just regular TCP?
123 2012-12-15 04:42:29 <paybitcoin1> https://en.bitcoin.it/wiki/Protocol_specification
124 2012-12-15 04:44:27 <D337z> Seems that it used to use a protocol similar to torrent but changed to its own p2p protocol
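The wiki page linked above describes the actual framing: messages travel over plain TCP with a small binary header, not JSON. A minimal sketch of that header (network magic, 12-byte command, payload length, and a checksum taken from the first four bytes of the double SHA-256 of the payload):

```python
import hashlib
import struct

MAGIC = bytes.fromhex("f9beb4d9")  # main network

def frame_message(command: str, payload: bytes) -> bytes:
    """Wrap a payload in the Bitcoin P2P message header (magic, command, length, checksum)."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    header = (MAGIC
              + command.encode().ljust(12, b"\x00")
              + struct.pack("<I", len(payload))
              + checksum)
    return header + payload

# e.g. a relayed block is frame_message("block", raw_block_bytes) written to a TCP socket
```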
125 2012-12-15 04:50:42 <D337z> Well, in any case, for 8 peers to be sending blocks and for blocks to be received rather slowly, it seems something could be done to make it more universally friendly.  *shrugs*  I'm just trying to make a suggestion in the right direction.
126 2012-12-15 04:51:03 <D337z> And GBT seems to only compress for miners.
127 2012-12-15 04:51:44 <D337z> I might have read that wrong though.  Eh, I'll take another swing at it when I'm over this darn cold.  T_T
129 2012-12-15 06:01:16 <Luke-Jr> ???
130 2012-12-15 10:08:03 <MC1984> seems like this client survives suspend to ram
131 2012-12-15 10:16:15 <MC1984> what's the likelihood of getting a bad peer on testnet rendering benchmarking useless
132 2012-12-15 10:16:27 <MC1984> is there something like bootstrap.dat for testnet?
133 2012-12-15 10:19:13 <winemaker> could someone "farm" lots of transaction fees when he makes a special client that tries to hold a connection to a very big number of other nodes?
135 2012-12-15 10:19:37 <oinooob> Copying from #bitcoin: I've been thinking about the (horrible) time to get a local chain up-to-date, especially when the machine hasn't got the 3-4GB RAM required to cache the current chain, and the data is on a spindle disk. Has anyone considered applying ARC (Adaptive Replacement Cache)?
136 2012-12-15 10:19:54 <oinooob> I also have some doubts about the current index. Granted, I haven't checked the code, but the idx file is larger than RAM on a box currently. Has it been considered to split it into 2 levels of index? f.ex. a "top-level" only indexing the first 32 bits, or something like that?
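oinooob's two-level index suggestion could be illustrated roughly like this; it is only a sketch of the idea (an in-memory table keyed on the first 32 bits of a hash, pointing at full entries on disk), not how the reference client actually indexes:

```python
from collections import defaultdict

# Illustration of the "top-level only indexing the first 32 bits" idea.
# top_level maps the first 4 bytes of a hash to file offsets of full index
# entries on disk; only those few candidates then need to be read and
# compared against the complete hash.
top_level: dict[bytes, list[int]] = defaultdict(list)

def add(full_hash: bytes, file_offset: int) -> None:
    top_level[full_hash[:4]].append(file_offset)

def candidates(full_hash: bytes) -> list[int]:
    # Usually zero or one candidate; 32-bit collisions are rare but must
    # still be resolved by checking the full hash on disk.
    return top_level.get(full_hash[:4], [])
```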
137 2012-12-15 10:21:36 <oinooob> (when I didn't know about -datadir, I tried to softlink (junction) the dir on Windows, and the client reliably and always crashed).
138 2012-12-15 10:24:28 <oinooob> winemaker: Aren't all block verifications sent out to everyone, with the first correct responder winning, or were you talking about a potential protocol modification?
139 2012-12-15 10:29:50 <winemaker> oinooob: i'm not very familiar with the protocol --i'm trying to understand the code and protocols for a few days now-- but if you hold a connection to the computers who are in this transaction, your chance to verify is higher than if you are not connected?
140 2012-12-15 10:36:49 <oinooob> Again, I don't know how the protocol works, but if it only uses a limited amount of peer connections, and you by chance happened to control all of them peers; yes, obviously, as no one else could verify. But then you'd also have to have the computing power to solve the blocks (very fast). As the target is around 10 minutes, I'd say chances are low you manage to hold all connections during this time, and even if you did you'd st
141 2012-12-15 10:38:59 <oinooob> Perhaps I misunderstand how transaction fees are "claimed" vs. how blocks are?
142 2012-12-15 13:09:46 <Eremes> guys, can you use ASCII codes or maybe Chinese/Korean characters when setting a password for your wallet?
143 2012-12-15 13:10:32 <sipa> have you tried?
144 2012-12-15 13:12:27 <Eremes> just did