1 2015-03-17 01:46:23 <Bog4r7> It's taking ages for bitcoin-qt to reindex the chain on my Atom-based laptop. Can I just copy all of ~/.bitcoin to my fast desktop and run the same version of bitcoin-qt over there, then copy back when it's done?
2 2015-03-17 01:46:59 <phantomcircuit> Bog4r7, yes
3 2015-03-17 01:47:10 <phantomcircuit> make sure to exit bitcoin-qt before copying though
4 2015-03-17 01:47:24 <Bog4r7> Great. Thx. Yes, of course.
5 2015-03-17 03:36:26 <fanquake> ;;blocks
6 2015-03-17 03:36:27 <gribble> 347944
7 2015-03-17 07:12:10 <brendafdez> what happens if accidentally a transaction has a change output below the magic minimum of 5400 satoshis or so? Will it still not be relayed by most nodes? That's something that has been bugging me quite a bit, and the limit in general.
8 2015-03-17 07:12:46 <sipa> brendafdez: yes
9 2015-03-17 07:13:13 <sipa> the limit is 3 times the relay fee based on size
10 2015-03-17 07:13:14 <brendafdez> ok, not very encouraging :/
11 2015-03-17 07:13:21 <sipa> which is more like 500 satoshi now
12 2015-03-17 07:13:26 <sipa> by default
13 2015-03-17 07:13:32 <gmaxwell> brendafdez: why? competent implementations will just convert tiny amounts like that into fees.
14 2015-03-17 07:13:42 <gmaxwell> instead of creating the worse-than-worthless change.
15 2015-03-17 07:14:32 <sipa> also, the network can't distinguish change from non-change (that's the point even!)
16 2015-03-17 07:14:41 <sipa> so yes, the same limit applies
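[editor's note: a minimal sketch of the "3x relay fee" dust rule sipa describes above, modeled loosely on Bitcoin Core's dust check of the era; the function name and the 148-byte input size are illustrative assumptions, not the exact implementation.]

```python
def dust_threshold(output_size_bytes: int, min_relay_fee_per_kb: int) -> int:
    """Sketch (assumed shape) of the '3x relay fee based on size' rule.

    An output is treated as dust if spending it would cost more than a
    third of its value in fees; the size counts both the output itself
    and the roughly 148-byte input later needed to redeem it.
    """
    total_size = output_size_bytes + 148  # output plus a typical input to spend it
    return 3 * min_relay_fee_per_kb * total_size // 1000

# A ~34-byte pay-to-pubkey-hash output at a 1000 sat/kB relay fee:
# 3 * 1000 * (34 + 148) / 1000 = 546 satoshis
```

At the older 5000+ sat/kB relay fees this lands near the "5400 satoshis or so" brendafdez remembers; at the lowered default it is in the few-hundred-satoshi range sipa quotes.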
17 2015-03-17 07:15:49 <brendafdez> yeap, maybe. Well then we should implement BIP3514: Change bit. :)
18 2015-03-17 07:16:15 <sipa> eh...
19 2015-03-17 07:17:05 <gmaxwell> brendafdez: uhh thats exactly opposite the intent there. You really _do not_ want very tiny amounts of change. That change will cost you more to spend than its worth, and cause perpetual utxo bloat as a result.
20 2015-03-17 07:17:06 <sipa> that would defeat the purpose (but i guess you're joking)
21 2015-03-17 07:27:08 <brendafdez> ok, well, really, i wouldn't think that the code should make that kind of judgement on whether certain outputs are good or bad, but I get your point. I would think that sane implementations wouldn't care about the user creating a transaction which outputs dust. I had specifically created such a transaction and i couldn't push it 'because dust'. If the dust is then unspendable, so be it, I'm not advocating for the client to pick dust outputs
22 2015-03-17 07:27:08 <brendafdez> as inputs by default either, but the whole limit to relay by default bc dust doesn't fit much my views. That's why i'm sharing it here, despite the jokes and the likelihood that I'll earn the troll badge sooner or later ;)
23 2015-03-17 07:28:25 <sipa> well you're forcing a cost onto the whole network, which you don't pay for
24 2015-03-17 07:28:33 <brendafdez> Anyway, it's at least a good way to showcase 'double spends' before confirmation, bc if you get to relay one such transaction most nodes won't take it and it will take forever to confirm, if it does, so it gives you ample chance to broadcast a conflicting standard transaction that will confirm instead. I take it as an educational thing.
25 2015-03-17 07:28:38 <sipa> for something that is not even valuable for you
26 2015-03-17 07:29:19 <brendafdez> Why wouldn't I pay for it, i'm still paying fee for the transaction and for that dust to be included in the blockchain. To me it's the same as burning bitcoin, if the user wants to, who is the code to say no...
27 2015-03-17 07:29:48 <sipa> you don't pay full nodes to relay or maintain your entry in the database
28 2015-03-17 07:30:15 <sipa> and you likely will never remove that entry, as it's uneconomic for you to spend it
29 2015-03-17 07:30:25 <sipa> the incentives are all wrong for it
30 2015-03-17 07:30:51 <sipa> your fee only goes to miners, not to full nodes that maintain or validate your entry
31 2015-03-17 07:31:13 <sipa> but you are likely forever burdening them with the cost of maintaining your data
32 2015-03-17 07:32:40 <sipa> it is in full nodes' interest to not do that
33 2015-03-17 07:33:23 <gmaxwell> You can still create the output if you want, just pay more into it... effectively it's just more 'fee', but in this case it has the advantage of making the output actually economic for you to redeem, avoiding the incentive problem.
34 2015-03-17 07:34:43 <brendafdez> i couldn't push a 1 satoshi output even with 0.02 BTC fee in the transaction. But ok, I guess that if im more selfish and put that fee towards myself it's solved
35 2015-03-17 07:35:24 <sipa> brendafdez: well don't look at it from your point of view
36 2015-03-17 07:35:33 <sipa> full nodes don't get that 0.02 BTC
37 2015-03-17 07:35:44 <sipa> their only incentive for running is have a useful currency
38 2015-03-17 07:36:52 <brendafdez> A useful currency is a currency which does what the user wants, so as i see it, allowing even things that are 'wrong' should be the default behavior for nodes
39 2015-03-17 07:36:58 <gmaxwell> brendafdez: thats the _opposite_ of selfish, as it actually optimizes the system towards usability.
40 2015-03-17 07:37:40 <sipa> brendafdez: you are creating an output which you likely will not ever spend; how is that useful for the currency?
41 2015-03-17 07:38:30 <gmaxwell> in any case, as mentioned... You can happily think of it as just a fee like any other fee. It's just one thats structured in a way that has better incentive alignment when it comes to small utxos.
42 2015-03-17 07:38:52 <sipa> and you get it back, even better :)
43 2015-03-17 07:39:39 <gmaxwell> (or, if not talking about change, the recipient gets it. Either way, it's not worse for you than paying it in fee.)
44 2015-03-17 07:40:57 <brendafdez> Yes, well, it's like keeping the pennies change when people buy from my store, bc they wouldn't be able to ever spend them towards anything (much less so where i live in argentina, where a peso cent is like 0.0007 dollars). Ok, I'll tell people about the 'reasons' when i use this to showcase 'double spends' at talks :)
45 2015-03-17 08:22:52 <Adlai> brendafdez: see https://what-if.xkcd.com/22/
46 2015-03-17 08:22:55 <Adlai> same idea
47 2015-03-17 08:27:42 <sipa> Adlai: ha, i like that as an answer :)
48 2015-03-17 14:43:38 <morcos> sipa: let me ask you a question about db state during init
49 2015-03-17 14:43:49 <sipa> ok
50 2015-03-17 14:44:03 <sipa> i need to run soon, but i'll answer when i can
51 2015-03-17 14:44:08 <morcos> if you do something that requires a reindex (and you're running qt) then you wipe your on disk databases, but you don't necessarily clear out your in memory structures
52 2015-03-17 14:44:15 <morcos> such as vinfoBlockFile and nLastBlockFile
53 2015-03-17 14:44:49 <morcos> we have a bug in autoprune right now which causes those to get written back to the database when you're trying to restart unpruned; we can fix that bug
54 2015-03-17 14:44:57 <sipa> that's a bug, i guess
55 2015-03-17 14:45:06 <morcos> but it seems a bit risky to not clear out your in memory structures when you're wiping the database
56 2015-03-17 14:45:17 <sipa> there is an uninitblockdb function now
57 2015-03-17 14:45:24 <morcos> in particular nLastBlockFile is written regardless of whether anyone thinks it's dirty
58 2015-03-17 14:45:28 <sipa> which could be used for that
59 2015-03-17 14:45:56 <morcos> unloadblockdb?
60 2015-03-17 14:46:15 <sipa> yes, that's probably it (on my phone now)
61 2015-03-17 14:46:22 <morcos> ha i mean unloadblockindex
62 2015-03-17 14:46:39 <morcos> ok, i'll look into it, thanks
63 2015-03-17 15:26:31 <ajweiss> sipa, cfields: do you guys have any preference as to what to do when a reindex is issued for a pruned node? (wipe all block files? attempt to use them (are the fragmentation issues worth it?) move them out of the way? delete those we didn't use?)
64 2015-03-17 15:45:49 <wumpus> ajweiss: I'd implement the obvious 'wipe it all and start over' first, as it is least bug prone and easiest to implement
65 2015-03-17 15:46:10 <wumpus> ajweiss: later on that could be refined, if it turns out that is worth it and there are proper tests
66 2015-03-17 15:48:53 <wumpus> in principle having blocks out of order is no problem, it could keep the current block files and remember about the blocks in them (not even need to move or rename anything), then re-add earlier blocks and connect old blocks as necessary. But getting that right every time sounds quite involved.
67 2015-03-17 15:50:06 <wumpus> I've avoided that issue in -reindex in 0.10 by remembering out-of-order blocks in a local static data structure; that would have to move to something persistent in the case already present blocks can be needed later outside reindexing...
68 2015-03-17 15:50:41 <wumpus> in any case in practice a pruned node would only retain a low % of the blocks, so redownloading everything isn't that big of an issue.
69 2015-03-17 15:56:41 <morcos> wumpus: should we do anything different in the non-pruned case (sort of independent of whether you were previously pruned). if you're missing a block file for any reason, and therefore your reindex stops at some point. it seems like you don't want to keep block files beyond that point
70 2015-03-17 15:57:46 <morcos> when you issue a reindex, i don't think you'd know if you were missing block files because of pruning or just because you rm'ed them
71 2015-03-17 15:58:37 <wumpus> yes, right now it stops when a block file is missing
72 2015-03-17 15:59:00 <morcos> but doesn't delete any higher number'ed files..
73 2015-03-17 15:59:19 <wumpus> (if it'd use the mechanism I described above, and keep track of all blocks persistently, also the ones that are not connected, that would not be needed)
74 2015-03-17 15:59:32 <wumpus> well it will overwrite them at some point right
75 2015-03-17 16:00:21 <wumpus> I don't think a reindex without pruning should delete anything
76 2015-03-17 16:00:58 <morcos> well when do you 'wipe it all and start over'? if you -reindex, then you don't even read the database to know whether you were previously pruned
77 2015-03-17 16:01:26 <wumpus> right. you don't even need to wipe anything
78 2015-03-17 16:01:45 <wumpus> it will start over automatically as it can't find any blocks
79 2015-03-17 16:01:50 <morcos> so only wipe if passing -prune=X and -reindex?
80 2015-03-17 16:02:10 <wumpus> I think so.
81 2015-03-17 16:02:20 <wumpus> block deleting should only happen when pruning is active
82 2015-03-17 16:02:50 <wumpus> otherwise you're not asking it to delete anything, so it shouldn't
83 2015-03-17 16:03:17 <morcos> ok, i guess with cfields fix, that shouldn't break anything, the important thing is to delete those old block files if you're pruning again, otherwise they'll end up clogging up your pruning at some point
84 2015-03-17 16:03:55 <wumpus> yes
85 2015-03-17 16:08:49 <morcos> its still a little messy if you run -prune, then go to -prune=0 (so have to -reindex), but now you have stale high-numbered block files out there; if you go back to -prune (no reindex needed) before those get overwritten, they'll still be clogging things.. but is that worrying too much?
86 2015-03-17 16:11:16 <wumpus> I don't see that as a problem, if you're not pruning, you expect to get exactly the same number (or more) of block files again
87 2015-03-17 16:11:34 <morcos> no, you'll get less
88 2015-03-17 16:11:47 <morcos> because you'll now write without orphans
89 2015-03-17 16:11:50 <morcos> stale blocks
90 2015-03-17 16:12:00 <wumpus> that's only a very small difference
91 2015-03-17 16:12:18 <wumpus> the block chain grows fast enough in the meantime for that not to matter
92 2015-03-17 16:12:39 <morcos> well if you have a long running node, it's easy enough to leave a couple extra block files hanging out there right?
93 2015-03-17 16:12:56 <wumpus> so what? they will get overwritten in time
94 2015-03-17 16:13:53 <morcos> ok.. you can just say i'm worrying too much. :) i was just thinking they'll stop you from pruning in the meantime, and making it harder for you to meet your target, but yeah thats not the end of the world...
95 2015-03-17 16:14:54 <wumpus> as said above - ideally the reindex would notice the blocks that are already there, mark them as present but not connected in the block database, and start from there. But it doesn't need to do that initially, I think it's quite error prone to implement and I'd rather prefer a simpler mechanism for 0.11.
96 2015-03-17 16:20:20 <wumpus> having -reindex delete anything when -prune isn't set, however, doesn't hold up to the principle of least surprise
97 2015-03-17 16:21:57 <wumpus> there could be an extra option, I suppose, to delete unused block files, which defaults to true if prune is enabled, but directly linking it to prune is fine too
98 2015-03-17 16:37:20 <cfields> morcos: i agree with the above logic. as a next step, maybe teach -loadblock to deal with a group of .dat files? or maybe more intuitively, create a -reindex-from=? that would let someone move their blocks aside, prune, then reindex using the moved files to get back to unpruned
99 2015-03-17 16:39:43 <cfields> meh, nm. that'd either involve a ton of copying, or end up depending on files in the specified location
100 2015-03-17 17:28:03 <morcos> cfields: in PruneOneBlockFile, you have to now go through mapBlockIndex and update all of the block indices which referred to that file; it's not a matter of finding just one
101 2015-03-17 17:31:31 <cfields> morcos: i see now, thanks. i got stuck thinking in terms of files
102 2015-03-17 17:50:47 <ajweiss> is LOCK2 safe?
103 2015-03-17 17:51:31 <luke-jr_> ajweiss: ? why would it exist if not safe?
104 2015-03-17 17:51:48 <ajweiss> lots of things exist that are not safe
105 2015-03-17 17:52:56 <gavinandresen> safe how? it is as safe as LOCK....
106 2015-03-17 17:53:27 <gavinandresen> (which are both not very safe, you can deadlock if you get the lock order wrong)
107 2015-03-17 17:54:19 <ajweiss> ok, so just make sure to lock inwards
108 2015-03-17 17:54:50 <ajweiss> i was assuming i'd find something that releases the outer lock if it can't grab the inner one
109 2015-03-17 17:54:51 <gavinandresen> yes, and compile with -DDEBUG_LOCKORDER to catch mistakes before deadlocking
110 2015-03-17 18:08:07 <morcos> we're just trying to reason about what exactly should be locked. cs_LastBlockFile seems to only be locked when cs_main is locked anyway, so it's hard to figure out precisely what it should be guarding. For instance, should cs_LastBlockFile be locked in FlushStateToDisk as well?
111 2015-03-17 18:14:04 <sipa> ajweiss: there is TRY_LOCK
112 2015-03-17 18:14:16 <sipa> ajweiss: you can emulate with that
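[editor's note: the back-off pattern ajweiss was looking for, which TRY_LOCK lets you emulate, sketched here in Python for illustration; Bitcoin Core's LOCK/LOCK2/TRY_LOCK are C++ macros, and `lock_both` is a hypothetical name.]

```python
import threading

def lock_both(outer: threading.Lock, inner: threading.Lock) -> None:
    """Acquire two locks without risking the classic two-lock deadlock.

    Take the outer lock, then *try* the inner one; on failure, release
    the outer lock and start over, so we never hold one lock while
    blocking on the other.
    """
    while True:
        outer.acquire()
        if inner.acquire(blocking=False):  # the TRY_LOCK step
            return  # both held
        outer.release()  # back off; another thread holds the inner lock
```

Bitcoin Core instead takes the simpler route discussed above: always acquire in a fixed order and catch violations with -DDEBUG_LOCKORDER builds.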
113 2015-03-17 18:15:06 <sipa> morcos: it likely doesn't matter, as lastblockfile is never accessed outside of main
114 2015-03-17 18:15:10 <sipa> but it may change
115 2015-03-17 18:15:17 <sipa> if we make things more granular
116 2015-03-17 19:30:17 <cfields> sipa: to go along with the fortuna pr, interested in extending crypto/aes so that we can drop openssl in crypter ?
117 2015-03-17 19:57:10 <kanzure> https://github.com/bitcoin/bips/blob/2ea19daaa0380fed7a2b053fd1f488fadba28bda/bip-0032.mediawiki#private-parent-key--private-child-key
118 2015-03-17 19:57:13 <kanzure> "In case parse256(IL) ≥ n or ki = 0, the resulting key is invalid, and one should proceed with the next value for i. (Note: this has probability lower than 1 in 2^(127).)"
119 2015-03-17 19:57:16 <kanzure> why?
120 2015-03-17 20:15:45 <harding> kanzure: why what? Those are invalid values for keys on secp256k1.
121 2015-03-17 20:53:49 <sipa> kanzure: you could do modulo to reduce them to a valid number, but that would introduce a tiny bias, to which ecdsa is vulnerable
122 2015-03-17 20:54:08 <sipa> kanzure: due to the ridiculously low chance, that shouldn't matter in practice
123 2015-03-17 20:54:15 <sipa> but no need to expose it
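[editor's note: the BIP32 rule kanzure quotes, sketched in Python; N is the public secp256k1 group order, while the function name and signature are illustrative, not BIP32's actual pseudocode.]

```python
# Order of the secp256k1 group (a public constant from the curve parameters).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def child_key_or_none(il: bytes, parent_key: int):
    """Reject-and-retry rather than reduce, per BIP32.

    If parse256(IL) >= n or the derived ki == 0, the child key is
    declared invalid and the caller moves on to the next index i.
    Reducing IL mod n instead would introduce the tiny bias toward
    small values that sipa mentions ECDSA is sensitive to.
    """
    il_int = int.from_bytes(il, "big")
    if il_int >= N:
        return None  # out of range: invalid, try the next i
    ki = (il_int + parent_key) % N
    if ki == 0:
        return None  # zero key: invalid, try the next i
    return ki
```

Both failure cases together have probability below 1 in 2^127 per derivation, which is why skipping an index is harmless in practice.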
124 2015-03-17 21:55:27 <LeMiner> 2015-03-17 21:53:33 *** System error while flushing: CDB : Error -30974, can't open database
125 2015-03-17 21:55:27 <LeMiner> 2015-03-17 21:53:37 CDBEnv::EnvShutdown : Error -30974 shutting down database environment: DB_RUNRECOVERY: Fatal error, run database recovery
126 2015-03-17 21:55:46 <LeMiner> any ideas why i get that when i close qt?
127 2015-03-17 21:56:43 <sipa> do you get it reproducibly, or just once?
128 2015-03-17 21:56:51 <LeMiner> every single time
129 2015-03-17 21:57:00 <LeMiner> when i start it up again it works like nothing happened
130 2015-03-17 21:57:15 <sipa> what filesystem/os/storage?
131 2015-03-17 21:57:28 <LeMiner> win8.1/ssd
132 2015-03-17 21:58:01 <LeMiner> ntfs
133 2015-03-17 21:58:12 <sipa> anything in db.log?
134 2015-03-17 21:58:17 <LeMiner> lemme see
135 2015-03-17 21:59:39 <LeMiner> PANIC: fatal region error detected; run recovery
136 2015-03-17 22:00:03 <LeMiner> Database environment corrupt; the wrong log files may have been removed or incompatible database files imported from another environment
137 2015-03-17 22:00:06 <sipa> are you using the wallet?
138 2015-03-17 22:00:16 <LeMiner> nah, it's empty. But keys are loaded i suppose
139 2015-03-17 22:00:39 <sipa> if you don't need the wallet, just delete the 'database' directory and wallet.dat
140 2015-03-17 22:01:07 <LeMiner> the database directory is in chainstate right?
141 2015-03-17 22:01:11 <sipa> no
142 2015-03-17 22:01:30 <sipa> in the datadir
143 2015-03-17 22:01:34 <sipa> it's possible you have none
144 2015-03-17 22:01:46 <LeMiner> yep, none found
145 2015-03-17 22:02:00 <LeMiner> lets see what it does now
146 2015-03-17 22:03:11 <LeMiner> fixed
147 2015-03-17 22:03:12 <LeMiner> ty :)
148 2015-03-17 22:03:21 <LeMiner> so it was a corrupt wallet file?
149 2015-03-17 22:03:36 <sipa> yes
150 2015-03-17 22:03:59 <LeMiner> great, ty man
151 2015-03-17 22:10:25 <sipa> phantomcircuit: there is still the risk of being fed many low-difficulty headers
152 2015-03-17 22:10:29 <phantomcircuit> <phantomcircuit> so it seems that removing the checkpoints isn't something that is viable (at least not without either functional fraud proofs or a massive performance hit)
153 2015-03-17 22:10:32 <phantomcircuit> <phantomcircuit> the smallest obviously safe step forward would be to simply remove them from the consensus path and only skip script checks if the block is in the same chain as the last checkpoint
154 2015-03-17 22:10:35 <phantomcircuit> <sipa> i would like that
155 2015-03-17 22:10:37 <phantomcircuit> <phantomcircuit> note this means it would be safe to have a single checkpoint instead of multiple
156 2015-03-17 22:10:43 <petertodd> phantomcircuit: so my proposal then? I'll go for that :P
157 2015-03-17 22:10:52 <sipa> petertodd: what is your proposal?
158 2015-03-17 22:11:01 <phantomcircuit> basically what i just said
159 2015-03-17 22:11:13 <phantomcircuit> i may or may not be appropriating his plan
160 2015-03-17 22:11:19 <petertodd> sipa: but not as well worded :P
161 2015-03-17 22:11:22 <sipa> and that still means you can be fed a large amount of low-difficulty headers filling your ram
163 2015-03-17 22:11:51 <sipa> (not saying there are no solutions for that, but you do have to formulate one :p)
164 2015-03-17 22:12:16 <petertodd> sipa: I think we can live with some ancient checkpoints IMO, but I'm a pragmatist
165 2015-03-17 22:12:45 <petertodd> sipa: eventually good to have at least an interactive total work commitment scheme
166 2015-03-17 22:12:49 <phantomcircuit> sipa, you can do that even as it is now by messing with the timestamps
167 2015-03-17 22:13:57 <sipa> i wonder whether checkpoints can just be reduced to a means of checking whether we have a headers chain that is unlikely to be a fake one, up to some reasonable timestamp (basically measuring non-sybilness)
168 2015-03-17 22:14:18 <sipa> and then just use pow and time comparisons with the best known chain
169 2015-03-17 22:15:39 <petertodd> an interactive non-miner-committed total work scheme would be useful for SPV too - probably a better use of resources
170 2015-03-17 22:15:53 <sipa> elaborate
172 2015-03-17 22:16:37 <phantomcircuit> sipa, actually has anybody really done the math on how many diff 1 shares you could possibly have mined?
173 2015-03-17 22:16:48 <phantomcircuit> if it's just the 350k we have now then
174 2015-03-17 22:16:49 <phantomcircuit> so what
175 2015-03-17 22:16:54 <sipa> heh?
176 2015-03-17 22:17:03 <petertodd> sipa: I claim to you that merkle tip XXX is the merkle sum tree of all block headers w/ total work, you sample parts of that tree to determine if I'm not lying - same end goal as the skip lists stuff really...
177 2015-03-17 22:17:16 <sipa> petertodd: ah yes
178 2015-03-17 22:17:29 <petertodd> phantomcircuit: ha, so have a max # of diff 1 shares you'll accept? nice
179 2015-03-17 22:17:30 <sipa> you can balance the tree based on work
180 2015-03-17 22:18:01 <sipa> basically a huffman tree, with probabilities proportional to the individual block's work
181 2015-03-17 22:18:05 <phantomcircuit> sipa, with headers first are you only asking 1 peer for the headers?
182 2015-03-17 22:18:12 <petertodd> sipa: you don't need to balance it, just make the sample algorithm be sum work biased - lets the actual tree storage be immutable
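[editor's note: a sketch of the "sum-work biased" sampling petertodd proposes, under the assumption that the verifier knows each block's claimed work; the function name and interface are hypothetical.]

```python
import bisect

def pick_index(works, r):
    """Map r in [0, sum(works)) to a block index, where each index
    covers a span proportional to its work.

    High-work (recent) blocks are therefore challenged more often,
    achieving the effect of a work-balanced tree without rebalancing
    the underlying append-only MMR storage.
    """
    cumulative = []
    total = 0
    for w in works:
        total += w
        cumulative.append(total)
    return bisect.bisect_right(cumulative, r)
```

In use, the challenger would draw r uniformly from [0, total work) and ask the prover for the MMR path to `pick_index(works, r)`; a prover lying about total work gets caught with probability proportional to the work they fabricated.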
183 2015-03-17 22:18:40 <sipa> how about linear? :p
184 2015-03-17 22:18:52 <petertodd> sipa: huh?
185 2015-03-17 22:18:53 <sipa> phantomcircuit: initially, just one, in the end, from all
186 2015-03-17 22:19:08 <phantomcircuit> sipa, any reason for it to be just one?
187 2015-03-17 22:19:08 <sipa> petertodd: giving it the same shape as the blockchain itself :)
188 2015-03-17 22:19:22 <sipa> phantomcircuit: yes, otherwise you waste downloading all headers from every peer
189 2015-03-17 22:19:25 <petertodd> sipa: a MMR implementation is stored linearly
190 2015-03-17 22:19:32 <sipa> it's not much but it is 30 MB
191 2015-03-17 22:19:36 <phantomcircuit> sipa, i was more thinking round robin
192 2015-03-17 22:19:52 <sipa> phantomcircuit: i challenge you to implement that in a robust way
193 2015-03-17 22:19:58 <phantomcircuit> or is there some "best" peer selection going on
194 2015-03-17 22:20:39 <sipa> petertodd: yes i know; ignore me, i was joking
195 2015-03-17 22:21:23 <petertodd> sipa: it's almost like we need a special symbol to indicate that someone is joking over IRC... maybe... â?
196 2015-03-17 22:21:25 <sipa> petertodd: balancing would just be a constant-factor improvement to the number of average nodes to query
197 2015-03-17 22:22:10 <phantomcircuit> sipa, making me actually look at the code...
198 2015-03-17 22:22:24 <petertodd> sipa: sure about that? doesn't a sum-biased selection algorithm give you the same result?
199 2015-03-17 22:23:04 <sipa> petertodd: well reaching a leaf in an MMR will on average take log2(N) steps, with N the total number of commitments, right?
200 2015-03-17 22:23:09 <petertodd> sipa: yup
201 2015-03-17 22:23:41 <sipa> with a balanced tree, if the work is so high that one block has as much as all of history, the average would be 2 nodes to reach a leaf
202 2015-03-17 22:23:55 <sipa> so there is some example for which your statement is not true :)
203 2015-03-17 22:23:58 <petertodd> sipa: ah, I see what you mean, yeah that's true
204 2015-03-17 22:24:07 <sipa> not saying that's a problem
205 2015-03-17 22:24:23 <petertodd> sipa: where an MMR is always log2(N)
206 2015-03-17 22:24:48 <sipa> so 19-20 ish :)
207 2015-03-17 22:25:09 <petertodd> sipa: but balancing code is tricky, and if you end up committing it eventually, it's harder to change than changing the sampling code
208 2015-03-17 22:25:27 <phantomcircuit> petertodd, btw i think you can simplify your MMR stuff by defining a null node
209 2015-03-17 22:25:37 <phantomcircuit> gets rid of the mtr on top
210 2015-03-17 22:26:20 <petertodd> phantomcircuit: yes you can, but that gets rid of how recent indexes have particularly short proofs on average, and how in the MMR implementation in proofmarshal, you can efficiently concatenate MMR's together
211 2015-03-17 22:27:08 <petertodd> phantomcircuit: defining a null node makes it into a strict binary tree
212 2015-03-17 22:47:27 <phantomcircuit> sipa, heh i see there's some very basic "reasonable peer" selection
213 2015-03-17 22:51:51 <phantomcircuit> so the question is what's the time to process a block of "header" s
214 2015-03-17 22:52:47 <phantomcircuit> wait no it's not we're already processing the headers before sending another getheaders
215 2015-03-17 22:55:16 <sipa> indeed
216 2015-03-17 22:55:22 <sipa> validating headers is fast
217 2015-03-17 22:55:58 <phantomcircuit> sipa, so why do you think there would be an issue with round robin?
218 2015-03-17 22:56:28 <phantomcircuit> you might get a bad peer, but from what i can see you might get a bad first peer already anyways
219 2015-03-17 23:00:27 <sipa> phantomcircuit: if they disappear we pick another
220 2015-03-17 23:00:34 <sipa> i wonder what happens if they don't answer
221 2015-03-17 23:00:54 <phantomcircuit> im thinking we get stuck
222 2015-03-17 23:02:34 <sipa> well, need the tip we ask for headers from everyone
223 2015-03-17 23:03:04 <phantomcircuit> did you miss a word?
224 2015-03-17 23:03:38 <sipa> no, but i made a typo
225 2015-03-17 23:03:48 <sipa> s/need/near/
226 2015-03-17 23:04:28 <phantomcircuit> ah that makes a lot more sense
227 2015-03-17 23:04:45 <sipa> so if we're reasonably synced, there is little risk
228 2015-03-17 23:04:54 <sipa> but a bad peer could prevent our initial sync
229 2015-03-17 23:05:01 <sipa> (in many ways)
230 2015-03-17 23:06:31 <phantomcircuit> it looks like we get unstuck if a peer sends an inv for a block
231 2015-03-17 23:06:40 <phantomcircuit> any peer
232 2015-03-17 23:06:56 <phantomcircuit> actually... are you sure this doesn't ask all the peers for blocks?
233 2015-03-17 23:07:36 <phantomcircuit> it seems like if a peer sends an inv for a block you'll send them a getheaders which will trigger the request behavior for that peer also
234 2015-03-17 23:09:42 <phantomcircuit> hmm that might not hit the max headers limit
235 2015-03-17 23:13:02 <phantomcircuit> only one way to find out
236 2015-03-17 23:15:27 <phantomcircuit> well that was convenient
237 2015-03-17 23:15:41 <phantomcircuit> first peer was a hidden service that isn't responding to getheaders requests
238 2015-03-17 23:15:50 <phantomcircuit> and im stalled
239 2015-03-17 23:17:30 <sipa> phantomcircuit: an inv won't trigger a header fetch if we're not already synced
240 2015-03-17 23:17:48 <sipa> and if we are, we're already fetching headers from everyonr
241 2015-03-17 23:17:53 <sipa> plus, invs are rare
242 2015-03-17 23:21:23 <phantomcircuit> sipa, not so rare that they couldn't push you out of a stall
243 2015-03-17 23:21:34 <phantomcircuit> and are you sure about the first part?
244 2015-03-17 23:21:38 <sipa> no
245 2015-03-17 23:21:46 <phantomcircuit> if (!fAlreadyHave && !fImporting && !fReindex && !mapBlocksInFlight.count(inv.hash)) {
246 2015-03-17 23:22:02 <phantomcircuit> none of those would stop you from sending the getheaders
247 2015-03-17 23:22:13 <sipa> ah yes
248 2015-03-17 23:22:34 <sipa> only the getdata is done in case we believe we're close to done
249 2015-03-17 23:23:27 <phantomcircuit> yeah
250 2015-03-17 23:23:45 <phantomcircuit> so a random inv for a block would cause you to do headers sync with more than 1 peer i think
251 2015-03-17 23:25:40 <phantomcircuit> ha yeah it does
252 2015-03-17 23:26:28 <phantomcircuit> i've got "headers" responses here from each of my peers
253 2015-03-17 23:27:09 <phantomcircuit> sipa, so thoughts?
254 2015-03-17 23:27:27 <sipa> i guess that's good
255 2015-03-17 23:43:12 <phantomcircuit> sipa, in the ProcessMessage "headers" section, the UpdateBlockAvailability(pfrom->GetId(), pindexLast->GetBlockHash()) call
256 2015-03-17 23:43:19 <phantomcircuit> shouldn't that be for each header not just the last?
257 2015-03-17 23:49:27 <sipa> well they are required to be in order
258 2015-03-17 23:49:38 <sipa> processing the last one is good enough :)
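[editor's note: a sketch of why processing only the last header works: each 80-byte header commits to its predecessor, so a batch that links up hash-by-hash is fully summarized by its last hash. The function names are illustrative; the 4..36 byte range for hashPrevBlock follows the standard header serialization.]

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256, used for block header hashes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def connects_in_order(headers, prev_hash):
    """Check a 'headers' batch links contiguously to prev_hash.

    Each serialized header carries its parent's hash at bytes 4..36
    (hashPrevBlock). If every link checks out, the last header's hash
    determines the whole batch -- which is why updating availability
    from pindexLast alone is enough.
    """
    for h in headers:
        if h[4:36] != prev_hash:
            return False
        prev_hash = dsha256(h)
    return True
```

An out-of-order or disconnected batch fails the link check immediately, so a peer can't smuggle unrelated headers into the middle of a response.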
259 2015-03-17 23:54:07 <phantomcircuit> sipa, oh i see i misunderstood what it was doing
260 2015-03-17 23:54:11 <phantomcircuit> nvm
261 2015-03-17 23:57:49 <phantomcircuit> sipa, yeah the current behavior mostly does the 1 peer headers thing because it only takes seconds to get all the headers
262 2015-03-17 23:57:58 <phantomcircuit> so it's unlikely that an inv block message will be received before that
263 2015-03-17 23:58:09 <sipa> it's more like a minute to sync
264 2015-03-17 23:58:26 <phantomcircuit> depends on latency and stuff but yeah
265 2015-03-17 23:58:38 <phantomcircuit> but once you trigger the other peers you end up getting duplicates from all of them