1 2015-09-02 00:08:56 <kanzure> two soft-fork proposals to make use of OP_NOPs https://bitcointalk.org/index.php?topic=1007831.0 and another one here https://bitcointalk.org/index.php?topic=1106586.msg11771108#msg11771108 basically one was to flip the behavior of some OP_NOPs and another one is to extend the OP_NOP set
2 2015-09-02 01:39:13 <Luke-Jr> kanzure: thanks for doing this research btw
3 2015-09-02 01:41:15 <kanzure> Luke-Jr: i have some notes at the bottom of http://gnusha.org/logs/2015-09-01.log about an irc log minimizer i am about to start writing in a few minutes, input appreciated... (i am going to read -dev and -wizards logs, but really i need to reduce the data set down from ~10 million lines)
4 2015-09-02 01:41:22 <kanzure> would appreciate input on how to do this sanely
5 2015-09-02 01:42:29 <kanzure> also i just finished reading through all of the interesting-looking bitcointalk.org technical subforum threads (created bookmarks and documented things, etc). so ready to move on to irc logs..
6 2015-09-02 01:42:31 <InternetFriend> I'm working on an experiment, and I can't seem to find any resources on modifying the genesis block of a bitcoin fork to create a new blockchain. Anyone have any suggestions on where to look in the code?
7 2015-09-02 01:42:44 <InternetFriend> I'd be really appreciative of any advice
8 2015-09-02 01:43:40 <jcorgan> you could look at any of dozens of altcoins to see how they did it
9 2015-09-02 01:43:58 <InternetFriend> the genesis implementation has changed in the more recent versions of bitcoin core
10 2015-09-02 01:44:08 <InternetFriend> I actually made an alt with a friend a few years ago
11 2015-09-02 01:44:14 <InternetFriend> but the implementation in core is far different now
12 2015-09-02 01:45:07 <Luke-Jr> kanzure: oh, that's what you wanted logs for! did you get them? or shall I dig mine out?
13 2015-09-02 01:45:29 <kanzure> still need
14 2015-09-02 01:45:30 <Luke-Jr> InternetFriend: ##altcoin-dev
15 2015-09-02 01:45:38 <InternetFriend> thanks much :D
16 2015-09-02 01:45:52 <Luke-Jr> kanzure: can I trust you not to leak anything I accidentally have sensitive? :P
17 2015-09-02 01:46:15 <Luke-Jr> kanzure: you want just -dev and -wizards, or #bitcoin also?
18 2015-09-02 01:46:17 <kanzure> huh, alright. maybe i can filter that stuff out?
19 2015-09-02 01:46:30 <Luke-Jr> hmm
20 2015-09-02 01:46:39 <kanzure> i would like -dev if you have early -dev stuff, and if you have good -wizards logs that would be great. andytoshi's logs are like 3 simultaneous different formats.
21 2015-09-02 01:47:06 <Luke-Jr> [Wednesday, September 02, 2015] [1:46:30 AM] <Luke-Jr> hmm
22 2015-09-02 01:47:08 <Luke-Jr> [Wednesday, September 02, 2015] [1:46:35 AM] <-> kanzure> test PM
23 2015-09-02 01:47:47 <kanzure> what about my reply?
24 2015-09-02 01:47:52 <Luke-Jr> that's in another og
25 2015-09-02 01:47:53 <Luke-Jr> log
26 2015-09-02 01:47:57 <kanzure> oh that's not bad then
27 2015-09-02 01:48:16 <kanzure> grep -v "^<-> "
28 2015-09-02 01:50:00 <Luke-Jr> #bitcoin-mining ?
29 2015-09-02 01:50:19 <kanzure> up to you
30 2015-09-02 01:50:25 <Luke-Jr> prob not worth it
31 2015-09-02 01:50:41 <kanzure> i am hunting for wizard magic stuff
32 2015-09-02 01:50:48 <kanzure> merged mining sorta counts
33 2015-09-02 01:50:51 <Luke-Jr> #bitcoin-fpga maybe
34 2015-09-02 01:51:56 <Luke-Jr> #bitcoin-tech might be worth something
35 2015-09-02 01:56:56 <Luke-Jr> heh #bitcoin-wasteawaythedayarguing
36 2015-09-02 01:59:14 <kanzure> for #bitcoin-dev i have everything after 2013-03-12, so earlier stuff would be useful. for -wizards anything from before 2014-02-23 back to origins (2013?) would be useful (or more).
37 2015-09-02 01:59:44 <kanzure> andytoshi wizard logs are spotty, weird timestamps, weird formatting problems.
38 2015-09-02 02:01:42 <Luke-Jr> kanzure: is it okay if there's an out-of-order day in here?
39 2015-09-02 02:03:13 <kanzure> probably
40 2015-09-02 02:08:04 <Luke-Jr> kanzure: do you have IPv6? :p
41 2015-09-02 02:08:20 <kanzure> yes
42 2015-09-02 02:08:42 <Luke-Jr> k, I'll DCC them then
43 2015-09-02 02:09:08 <kanzure> python3 -m http.server
44 2015-09-02 02:11:56 <kanzure> might not be ipv6 from this irc client :-)
45 2015-09-02 02:12:22 <Luke-Jr> o
46 2015-09-02 02:12:33 <kanzure> dcc said can't connect, i'll assume ipv6 incompatibility
47 2015-09-02 02:13:16 <Luke-Jr> kanzure: 0E4C A12B E16B E691 56F5 40C9 984F 10CC 7716 9FD2 ?
48 2015-09-02 02:15:35 <kanzure> yeah sorry about that, ipv4 only for the moment
49 2015-09-02 02:17:32 <Luke-Jr> kanzure: 0E4C A12B E16B E691 56F5 40C9 984F 10CC 7716 9FD2 is your correct key?
50 2015-09-02 02:20:39 <kanzure> yes
51 2015-09-02 02:22:42 <Luke-Jr> ~13 min ETA on upload
52 2015-09-02 02:23:00 <kanzure> you must be using a potato for an internet connection :-)
53 2015-09-02 02:23:11 <Luke-Jr> best I can get :<
54 2015-09-02 02:23:21 <Luke-Jr> it's only 50 MB too
55 2015-09-02 02:38:06 <Luke-Jr> kanzure: sure PM
56 2015-09-02 02:38:08 <Luke-Jr> see PM*
57 2015-09-02 02:39:10 <kanzure> thank you
58 2015-09-02 04:50:31 <andytoshi> kanzure: should only be two formats fwiw, my weird weechat format and petertodd's format
59 2015-09-02 04:50:45 <andytoshi> oh, there is also that weird HTML format when i was sed'ing my weechat logs into HTML
60 2015-09-02 04:51:16 <morcos> wumpus: in the interest of covering all bases, i went ahead and pushed up my suggested short term improvement to fee estimation. it's really just a constant change and recompile, but then the unit tests break, not really designed robustly i guess.
61 2015-09-02 04:52:18 <morcos> anyway, in case people start complaining about fee estimation in the face of a big stress test while i'm gone, this does make a difference. #6618
62 2015-09-02 05:26:06 <Eliel> is there an address spec designed to contain ECDSA public key?
63 2015-09-02 05:28:43 <gmaxwell> Eliel: the xpub one in BIP32? really an ECDSA public key itself is never an 'address' in the bitcoin system.
64 2015-09-02 05:29:21 <gmaxwell> An address is always a compressed/templated representation of a scriptpubkey. An EC pubkey alone doesn't tell you what kind of script it should be used with (unless something about the encoding does by convention)
65 2015-09-02 05:31:26 <Eliel> Ok, is there some reason you'd need to know the public key (in addition to a bitcoin address) to use it in a multisig?
66 2015-09-02 05:38:17 <Eliel> https://bitcoin.org/en/developer-guide#standard-transactions This gives the idea that addresses are useless for multisig, so I'm kind of wondering how come there's no address format specified for them. Unless that's what the xpub format is for.
67 2015-09-02 05:39:09 <gmaxwell> Multisig payments use p2sh almost ubiquitously.
68 2015-09-02 05:39:16 <gmaxwell> You'd need custom software to do otherwise.
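A concrete illustration of why the raw public keys (and not addresses) are needed here -- a sketch using the standard createmultisig RPC, with placeholder keys rather than real ones:

    bitcoin-cli createmultisig 2 '["<pubkey1-hex>","<pubkey2-hex>","<pubkey3-hex>"]'
    # returns a P2SH address plus the redeemScript; the address encodes only the
    # hash of that script, so whoever constructs it must already know the pubkeys.

An ordinary address likewise commits only to a hash of a key or script, which is why it cannot stand in for one of the multisig participants' public keys.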
69 2015-09-02 05:50:25 <CodeShark> address formats are legacy at this point - there's practically no reason to use anything other than p2sh
70 2015-09-02 05:50:34 <CodeShark> other than historical reasons
71 2015-09-02 05:56:34 <CodeShark> hopefully we'll get rid of the whole "standard transaction" thing too sooner or later :)
72 2015-09-02 05:59:47 <CodeShark> what we really want are proofs of program correctness and good bounds on computational cost for script evaluation
73 2015-09-02 06:38:41 <phantomcircuit> gmaxwell, btw it seems like we hold cs_main for a very long time when validating groups of blocks
74 2015-09-02 06:38:54 <phantomcircuit> it's pretty bad for rpc responsiveness
75 2015-09-02 06:39:33 <CodeShark> rpc locking inefficiency is a longstanding issue
76 2015-09-02 06:39:45 <gmaxwell> Yea, I've noticed that.
77 2015-09-02 06:39:51 <gmaxwell> CodeShark: nothing to do with RPC there.
78 2015-09-02 06:40:34 <CodeShark> no?
79 2015-09-02 06:41:46 <CodeShark> they will contend with each other - but the locks can just be around the atomic operations
80 2015-09-02 06:42:10 <gmaxwell> CodeShark: in that case the complaint is that during a sync or reorg when we verify a group of blocks we hold locks for a long time without releasing, bad latency. Fixing that is clearly good and will help everything including RPC have lower latency.
81 2015-09-02 06:43:12 <gmaxwell> CodeShark: most things we do with the blockchain are fast. e.g. the individual operations are much cheaper than the locking (by orders of magnitude) in most cases, where they're not I agree they should be broken up.
82 2015-09-02 06:43:19 <CodeShark> right - I follow. you're right, RPC isn't the fundamental issue...but RPC locking inefficiency is a symptom of similar habits
83 2015-09-02 06:43:30 <gmaxwell> yea okay, agreement then on those points.
84 2015-09-02 06:46:19 <gmaxwell> Just keep in mind that the tiers of operation costs go something like bitops, multiplies, divides, memory access, pointer-chasing, locking (especially contended), network/disk access .. and basically you can do almost any amount of a lower item on that kind of hierarchy to replace a higher item and have it be a win. (well obviously, thats not true in an absolute sense, but its the right intuition)
85 2015-09-02 06:46:43 <gmaxwell> Each one is another order of magnitude in cycle times (corrected for pipeline throughput and such)
86 2015-09-02 06:51:54 <CodeShark> the name cs_main itself nicely reflects the problem - the code was designed more for simplicity in logic and avoiding difficult synchronization issues than for performance
87 2015-09-02 06:53:10 <CodeShark> which is quite sensible in many applications
88 2015-09-02 06:53:17 <gmaxwell> sure it's a big global lock, but we are much more fine grained now.. but when it comes to the blockchain itself, well-- bitcoin core is a wrapper around a single big datastructure. :)
89 2015-09-02 06:54:24 <jonasschnelli> I once did play around with reducing locks and making them more fine grained. But the main issue is probably avoidance of deadlocks
90 2015-09-02 06:54:49 <gmaxwell> And yes, an argument could be made that it would be sensible to allow efficient reader/writer access to the blockchain (e.g. you can get read-only access fast always, but you might have a slightly stale view) but I suspect that is not worth the implementation cost, though it would allow a massive increase in query throughput.
91 2015-09-02 06:55:27 <CodeShark> it depends on the level of concurrency we wish to support
92 2015-09-02 06:55:43 <CodeShark> if we've only got one client connecting at any given time it isn't really that huge an issue
93 2015-09-02 06:57:10 <CodeShark> and if we want an RPC that can support high concurrency it probably needs a significant redesign from what we currently have
94 2015-09-02 06:58:01 <gmaxwell> if the blockchain was RCUed plus the libevent stuff it would give high concurrency query access ::shrugs:: The amount of actually mutated blockchain data is quite small, so the overhead wouldn't be great (ignoring implementation risk and complexity).
95 2015-09-02 06:58:48 <gmaxwell> but I think it is not interesting right now, esp as the last I benchmarked most of the overhead in RPC performance was the json stuff (maybe fixed now?)
96 2015-09-02 06:59:21 <gmaxwell> as in, by bypassing the json stuff I got performance many times that which you'd get from using all your cores, even though I was just single threaded.
97 2015-09-02 07:00:02 <CodeShark> really? hmmm
98 2015-09-02 07:00:11 <CodeShark> the bottleneck is json parsing?!?!
99 2015-09-02 07:02:38 <jonasschnelli> JSON parsing is very fast (and very unlikely a bottlenet) with jgarziks UniValue parser/encoder
100 2015-09-02 07:02:44 <jonasschnelli> *bottleneck
101 2015-09-02 07:03:00 <CodeShark> I wouldn't think it would be a bottleneck
102 2015-09-02 07:03:17 <gmaxwell> yea, univalue may well have fixed it.
103 2015-09-02 07:03:30 <gmaxwell> Should retest. :)
104 2015-09-02 07:04:14 <gmaxwell> but at one point the json handling was the vast majority of time spent for I think every slow RPC except getblocktemplate.
105 2015-09-02 07:06:54 <phantomcircuit> lol
106 2015-09-02 07:07:08 <phantomcircuit> the stuff before univalue wasn't even O(n) for encoding
107 2015-09-02 07:07:12 <phantomcircuit> nuisance
108 2015-09-02 07:08:59 <jonasschnelli> for performance tests (especially wallet) it would be nice to have the mainnet chain (clone) with some test-only adaptation that would allow creating transactions with any addresses (pass validation at several points) as well as significantly drop difficulty to allow the use of "generate"...
109 2015-09-02 07:10:09 <gmaxwell> jonasschnelli: I am not seeing the distinction you're making with regtest.
110 2015-09-02 07:10:45 <jonasschnelli> gmaxwell: it would probably be very difficult to get a restest chain with approx. the same size/complexity of the main chain?
111 2015-09-02 07:10:54 <jonasschnelli> s/restest/regtest
112 2015-09-02 07:11:01 <gmaxwell> I mean it doesn't bypass validation, (really not fond of littering utterly security critical code with more conditionals) but thats not needed for testing.
113 2015-09-02 07:11:15 <gmaxwell> oh I see. well no, its not so hard and it could be reused. We could put up an archive.
114 2015-09-02 07:11:30 <gmaxwell> But another thing you can do is jigger IsMine in the wallet logic for wallet performance testing.
115 2015-09-02 07:11:47 <gmaxwell> Back when the blockchain was only 2GB I put the whole thing in my wallet. Worked! shockingly. :)
116 2015-09-02 07:11:50 <jonasschnelli> example: i was testing some wallet performance. 100'000 transactions. But in regtest, most blocks did hold 1-4 transactions,.. and height is about 5000. So. Not very representative in terms of performance tests.
117 2015-09-02 07:11:57 <gmaxwell> I ... am doubtful it would work now for the whole thing. :)
118 2015-09-02 07:12:44 <jonasschnelli> If i could copy the main chain, find a way of using already existing address to perform/create new transactions with them (bypass some sort of validation), that would be representative in terms of performance.
119 2015-09-02 07:13:50 <gmaxwell> Though bypassing the validation likely makes it non-representative. In any case, you can import arbitrary addresses easily. Making transactions with them is also a trivial change, I don't think for testing you actually need the resulting txn to get accepted.
120 2015-09-02 07:13:57 <CodeShark> OP_CHECKSIG can have an if (fBypass) return true at the end :p
121 2015-09-02 07:14:07 <gmaxwell> e.g. you'd want to test the select coins speed, but who cares if the txn verifies?
122 2015-09-02 07:14:48 <jonasschnelli> My main focus: test wallet performance (listtransactions, listunspent), UI/QT performance..., coin-selection, etc.
123 2015-09-02 07:15:00 <jonasschnelli> This is hard to test on regtest.
124 2015-09-02 07:15:19 <jonasschnelli> Or lets say, ... hard to measure performance
125 2015-09-02 07:16:21 <gmaxwell> except for coin-selection you can just import arbitrary keys on mainnet. Somewhere I think I have a wallet that thinks it has the satoshidice keys.
126 2015-09-02 07:19:14 <jonasschnelli> Yes... that's right. Maybe even coin-selection could work with watch-only after some simple code changes (for testing purpose)
127 2015-09-02 07:23:34 <jonasschnelli> briefly time-profiled the new libevent RPC server with 10'000 "getchaintips" requests with 10 parallel threads...
128 2015-09-02 07:23:56 <jonasschnelli> 93.1% time spent in WorkQueue<HTTPClosure>::Run()
129 2015-09-02 07:24:13 <jonasschnelli> 88.9% spent in getchaintips(UniValue const&, bool)
130 2015-09-02 07:24:28 <jonasschnelli> The whole httpd overhead (including json) is minimal
131 2015-09-02 07:24:53 <gmaxwell> spiffy.
132 2015-09-02 07:25:09 <gmaxwell> did you get a general throughput number while you were at it?
133 2015-09-02 07:25:38 <jonasschnelli> you mean in ms?
134 2015-09-02 07:27:03 <gmaxwell> well how long did it take you to complete all 10,000 queries?
135 2015-09-02 07:27:16 <jonasschnelli> Time taken for tests: 18.636 seconds
136 2015-09-02 07:27:22 <jonasschnelli> Time per request: 1.864 [ms] (mean, across all concurrent requests)
137 2015-09-02 07:27:25 <jonasschnelli> (apache bench)
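For context, the numbers above suggest an apachebench invocation roughly like the following (the JSON file name, credentials and port are placeholders, not what jonasschnelli actually ran):

    echo '{"jsonrpc":"1.0","id":"ab","method":"getchaintips","params":[]}' > getchaintips.json
    ab -n 10000 -c 10 -p getchaintips.json -T 'application/json' -A rpcuser:rpcpass http://127.0.0.1:8332/

18.636 seconds for 10,000 requests works out to roughly 536 requests per second overall, i.e. the quoted 1.864 ms mean per request across the 10 concurrent workers.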
138 2015-09-02 07:28:05 <jonasschnelli> but it's self compiled bitcoind with --enable-debug, etc. ... my test did show significant performance differences when built over gitian.
139 2015-09-02 07:28:39 <jonasschnelli> could be because i'm using a different libevent locally.
140 2015-09-02 07:29:51 <jonasschnelli> I have also posted some stats from the "old/new" RPC some days ago (including the ab results): https://github.com/bitcoin/bitcoin/pull/5677#issuecomment-135964028
141 2015-09-02 07:30:09 <gmaxwell> Thanks! yes, I understood that was a debugging build. I was just trying to get what order of magnitude it was in.
142 2015-09-02 07:32:05 <jonasschnelli> The whole test is not very representative, only done on my local macbook... but it gives a direction.
143 2015-09-02 07:32:57 <jonasschnelli> IMO, the rpc server itself is very fast, parsing/http overhead tiny. As mentioned earlier by gmaxwell and CodeShark: locking is probably the main issue when facing laggy responses.
144 2015-09-02 07:34:44 <jonasschnelli> I'm currently working on an option for limiting network usage for bitcoind...
145 2015-09-02 07:35:00 <gmaxwell> There is particular instrumentation we could do to log cases where locks are held for an absolute long time.
146 2015-09-02 07:35:11 <gmaxwell> Which would probably be pretty informative.
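One possible shape for that instrumentation -- a minimal sketch, not existing Bitcoin Core code, using plain standard C++ rather than Core's own lock macros:

    #include <chrono>
    #include <cstdio>
    #include <mutex>

    // Sketch: an RAII guard that reports when a critical section was held "too long".
    class TimedLock
    {
    public:
        TimedLock(std::mutex& m, const char* name, int64_t warn_ms = 100)
            : lock(m), name(name), warn_ms(warn_ms),
              start(std::chrono::steady_clock::now()) {}
        ~TimedLock()
        {
            int64_t held = std::chrono::duration_cast<std::chrono::milliseconds>(
                std::chrono::steady_clock::now() - start).count();
            if (held > warn_ms)
                std::fprintf(stderr, "lock %s held for %lld ms\n", name, (long long)held);
        }
    private:
        std::lock_guard<std::mutex> lock;
        const char* name;
        int64_t warn_ms;
        std::chrono::steady_clock::time_point start;
    };

In Core itself this would more naturally hang off LOCK()/cs_main, but the idea is the same: time from acquisition to release and log the outliers.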
147 2015-09-02 07:35:16 <jonasschnelli> I now try to implement a limit (x MB), if reached, it will not respond to getblocks on historical blocks...
148 2015-09-02 07:35:18 <jonasschnelli> any other ideas?
149 2015-09-02 07:38:01 <gmaxwell> There are several distinct limit motivations. One is obeying ISP bandwidth caps to prevent high bills. Another is avoiding using lots of rate to avoid displacing other apps, a special case of this is not triggering buffer bloat by cramming a lot out at once.
150 2015-09-02 07:39:14 <gmaxwell> I'm aware of three basic strategies to shunt load: stopping serving the traffic (which should also come with a disconnect to avoid accidentally DOS attacking), delaying announcements (not applicable for history but for new blocks/txn you can greatly decrease your bandwidth usage by delaying announcements), and token-bucket rate shaping the sockets.
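The token bucket is the generic one of the three; a minimal sketch of the idea (hypothetical names, not a proposal for the actual socket code):

    #include <algorithm>
    #include <cstdint>

    // Sketch: classic token bucket limiting how many bytes a socket may send.
    struct TokenBucket
    {
        double tokens;    // bytes currently available
        double rate;      // refill rate, bytes per second
        double burst;     // bucket capacity, bytes
        int64_t last_us;  // time of last refill, microseconds

        // Refill for the elapsed time, then grant at most `want` bytes.
        size_t Take(size_t want, int64_t now_us)
        {
            tokens = std::min(burst, tokens + rate * (now_us - last_us) / 1e6);
            last_us = now_us;
            size_t granted = std::min(want, static_cast<size_t>(tokens));
            tokens -= granted;
            return granted;
        }
    };

A send loop built around this would write only as many bytes as Take() grants and leave the rest queued until more tokens accumulate.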
151 2015-09-02 07:39:56 <phantomcircuit> gmaxwell, lol actually, this is holding locks for so long that im losing network connections on the rpi2
152 2015-09-02 07:40:07 <gmaxwell> The ISP cap stuff basically always needs some kind of time window management. Tor has useful functionality for that.
153 2015-09-02 07:41:09 <jonasschnelli> ISP caps can probably be handled by other tools/hardware. I think the approach of not serving historical blocks after a limit has been reached would solve most "high bill" issues.
154 2015-09-02 07:42:23 <jonasschnelli> It would not be possible to set a hard cap, but at least you could say "serve up to 1GB of historical blocks per day, then stop serving blocks". But on top of that 1GB one would still have the "normal relay traffic"
155 2015-09-02 07:44:05 <jonasschnelli> clarification: "serve up to 1GB of historical blocks per day, then stop serving **historical** blocks"
156 2015-09-02 07:44:27 <jonasschnelli> serving blocks > now-1day would always happen
157 2015-09-02 07:44:51 <gmaxwell> I followed that, glad you mentioned per day as I was about to ask. I assume it would just instantly disconnect a peer if they tried and you were beyond limit?
158 2015-09-02 07:45:20 <gmaxwell> jonasschnelli: I showed you those old logs of how old the blocks peers request are, right?
159 2015-09-02 07:45:50 <gmaxwell> basically at least a couple years ago requests going back a week or two were pretty frequent compared to IBD.
160 2015-09-02 07:45:53 <jonasschnelli> gmaxwell: maybe not disconnect. I thought we could do the same as if one had -pruning enabled: Just don't respond to the getblock in such a case.
161 2015-09-02 07:46:33 <gmaxwell> No you will dos attack nodes by doing that. Pruning doesn't advertise node-network, so even remotely well implemented software won't get tripped up.
162 2015-09-02 07:46:54 <gmaxwell> 0.10+ bitcoin core might survive the non-response okay, but other software probably won't.
163 2015-09-02 07:47:25 <jonasschnelli> gmaxwell: so better fDisconnect=true in case of a node requesting historical blocks when the limit is reached?
164 2015-09-02 07:48:06 <wumpus> as there is no way to change node flags on the fly without disconnecting
165 2015-09-02 07:48:09 <jonasschnelli> Yeah. Makes more sense. The peer could then go on to ask different nodes for blocks.
166 2015-09-02 07:49:16 <gmaxwell> Yea. I'm still thinking a total usage cap would be better and not much harder than you're thinking. How about measure total out, and when you get within 4*max_blocksize*blocks_remaining you turn off historical, and at 0 you shut off listening?
167 2015-09-02 07:49:48 <gmaxwell> Then it would be pretty likely to actually meet the target as it would be counting all usage. Users are not going to be able to usefully reason about X amount of history.
168 2015-09-02 07:50:19 <gmaxwell> But they know their monthly cap is Y. And if we shut off at some reasonable multiple of the remaining blocks we'll likely not go over.
169 2015-09-02 07:52:14 <jonasschnelli> I have also thought about that. Maybe 0.7GB/day soft limit (stop serving historical blocks), 1GB/day hard limit (disconnect all nodes, stop listening/connecting). Limits would be configurable. But auto-setting the soft-limit (0.7GB in our example) based on blocksize would still be inaccurate because one does not know how many nodes he will talk to and therefore how many blocks/tx he might relay.
170 2015-09-02 07:52:18 <gmaxwell> I think just moving where its accounted at least doesn't change the complexity and makes the behavior more useful.
171 2015-09-02 07:53:13 <gmaxwell> hm. causing nodes to partition themselves on bandwidth usage isn't ideal.
172 2015-09-02 07:53:27 <gmaxwell> jonasschnelli: well I was trying to avoid over complexifying the initial work there.
173 2015-09-02 07:53:52 <gmaxwell> What it should also do at that point is delay all inv, this will make other peers unlikely to request data from them.
174 2015-09-02 07:54:16 <jonasschnelli> Why not just allowing to set -historicalblocklimit=1000MB? ... every other way of targeting the total bandwith max would be non trivial IMO
175 2015-09-02 07:54:25 <gmaxwell> e.g. if it's sending out INV 100ms later, then it never comes in first unless no other peer offered the material.
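A rough sketch of that announcement-delay idea (types and field names are hypothetical, not Core's):

    #include <cstdint>
    #include <deque>
    #include <string>
    #include <utility>

    // Sketch: queue announcements with a delay while bandwidth-limited, so that
    // peers who heard about the object elsewhere will fetch it elsewhere.
    struct PeerSketch
    {
        std::deque<std::pair<std::string, int64_t>> inv_queue;  // (inv id, send_at_us)
    };

    static const int64_t INV_DELAY_US = 100 * 1000;  // the 100 ms mentioned above

    void ScheduleInv(PeerSketch& peer, const std::string& inv, int64_t now_us, bool limited)
    {
        peer.inv_queue.emplace_back(inv, limited ? now_us + INV_DELAY_US : now_us);
    }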
176 2015-09-02 07:54:55 <jonasschnelli> interesting approach (INV delaying)
177 2015-09-02 07:55:38 <jonasschnelli> I just think we should not add hard limits in bitcoind itself. It might reduce the network quality in general.
178 2015-09-02 07:55:49 <gmaxwell> jonasschnelli: because its meaningless to the user. It's unrelated to their actual ISP bandwidth caps, and encourages setting it to random super low values, which could be harmful if nodes have to connect a zillion times to find someone with capacity left. I _think_ it would be no harder (perhaps easier) to use total bandwidth usage as the trigger, even if the remediation is purely history.
179 2015-09-02 07:56:55 <gmaxwell> E.g. targetmaxdailyoutputbound=x and it'll cut off history if it gets near it. and yea sure maybe it still blows way past if history wasn't the source of the issue.
180 2015-09-02 07:57:17 <gmaxwell> But thats also the case for -historicalblocklimit=1000MB.. but at least the setting is meaningful and we can improve it over time.
181 2015-09-02 07:58:00 <jonasschnelli> targetmaxdailyoutputbound sounds good. The difficult part is probably "[..]if it gets near it[..]" defining the "near".
182 2015-09-02 07:58:59 <wumpus> cutting off networking completely (even stop requesting new blocks) should normally be avoided I think. It's not nice to have to catch up afterwards. Although it could be an option ofc...
183 2015-09-02 07:59:16 <gmaxwell> Overall the network average number of blocks out per blocks in is only somewhat higher than 1.
184 2015-09-02 08:00:15 <gmaxwell> well the minimum outbound traffic to stay in sync is ~0. The minimum outbound to both stay in sync and not be a network leech is ~= maximum daily blocksize *2.
185 2015-09-02 08:00:26 <gmaxwell> (*2 because of the block + transaction overhead)
186 2015-09-02 08:00:55 <wumpus> right
187 2015-09-02 08:01:11 <gmaxwell> so there are in my view four activity levels; "full", "no history", "1:1", "node network is off".
188 2015-09-02 08:01:16 <wumpus> but if this DoSing w/ historical blocks requests keeps up I expect people to turn it off at some point, so having an option to still provide X MB per day is nice
189 2015-09-02 08:01:36 <gmaxwell> 1:1 meaning that you still share copies of blocks and transactions but only transmiting as much as you took in.
190 2015-09-02 08:01:42 <jonasschnelli> so if -targetmaxdailyoutputbound is reached, stop doing any outbound traffic unless it's a "getdata", "getaddr" command?
191 2015-09-02 08:03:32 <gmaxwell> jonasschnelli: well I think what I would suggest is that if you get within e.g 4x the maximum block sizes, you switch to no history, and if you exhaust, you turn off listening and node-network.. so you stay connected but using absolutely minimum bandwidth.
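Putting the thresholds gmaxwell describes in one place -- a sketch only, with illustrative constants rather than whatever an eventual -maxuploadtarget patch would use:

    #include <cstdint>

    // Sketch of a staged response to an upload target.
    static const uint64_t MAX_BLOCK_SIZE = 1000000;        // bytes
    static const uint64_t DAILY_TARGET   = 1ULL << 30;     // e.g. 1 GB per day

    enum class ServeMode { FULL, NO_HISTORY, LISTEN_OFF };

    ServeMode ModeForUsage(uint64_t bytes_sent_today)
    {
        if (bytes_sent_today >= DAILY_TARGET)
            return ServeMode::LISTEN_OFF;                   // stop listening; keep existing peers
        // Leave a few full blocks of headroom before the target is actually hit.
        if (DAILY_TARGET - bytes_sent_today <= 4 * MAX_BLOCK_SIZE)
            return ServeMode::NO_HISTORY;                   // stop serving historical blocks
        return ServeMode::FULL;
    }

The counter would reset every 60*60*24 seconds, per the discussion of the accounting window further down.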
192 2015-09-02 08:04:04 <gmaxwell> though bleh, localhost should be exempted at least (perhaps whitebind?)
193 2015-09-02 08:04:39 <wumpus> if you exclude localhost, tor will be excluded too. Agree that whitebind should be excluded.
194 2015-09-02 08:05:35 <jonasschnelli> haha
195 2015-09-02 08:06:16 <gmaxwell> not excluding localhost means things like p2pool will break, ... arguably it should use whitebind, but whitebind has an obnoxious behavior of bypassing the mempool for relay which wastes a ton of bandwidth if coupled with relay network client or p2pool.
196 2015-09-02 08:06:18 <wumpus> it's exactly what whitebind is for
197 2015-09-02 08:06:29 <jonasschnelli> wumpus: but if we measure traffic based on CNode, we can distinguish between localhost-non-tor-peers and tor-peers?
198 2015-09-02 08:06:46 <gmaxwell> (the mempool bypass is useful for basically armory and a few 'firewallish' applications, but not generally)
199 2015-09-02 08:07:02 <wumpus> jonasschnelli: you can't distinguish between them, both kind of connections come from localhost
200 2015-09-02 08:07:10 <gmaxwell> jonasschnelli: we can't but we should solve that generally for other reason.
201 2015-09-02 08:07:29 <wumpus> it may be possible to ask *tor* to connect to a different port though
202 2015-09-02 08:07:42 <wumpus> especially with the torcontrol class creating your hidden service
203 2015-09-02 08:08:01 <gmaxwell> Yes, or introduce yet another kind of whitebind. that doesn't overlay the funky relay policy.
204 2015-09-02 08:09:17 <wumpus> there's currently the question there 'if there are multiple binds, what port do I give Tor'? automatically creating a binding specifically for tor in the hidden service logic would be a small step further
205 2015-09-02 08:09:30 <gmaxwell> Yes I think a bind specifically for tor is good.
206 2015-09-02 08:09:36 <jonasschnelli> [...] you turn off listening and node-network [...]: can node-network be turned off without disconnecting a node?
207 2015-09-02 08:09:46 <gmaxwell> no.
208 2015-09-02 08:10:02 <gmaxwell> though you can stop sending invs.
209 2015-09-02 08:10:16 <gmaxwell> and you're already going to disconnect on the first getblock..
210 2015-09-02 08:10:17 <wumpus> then again I don't like treating localhost specially
211 2015-09-02 08:10:20 <wumpus> I really don't like it
212 2015-09-02 08:10:34 <wumpus> whitebind was invented for 'special snowflake' connections
213 2015-09-02 08:10:39 <wumpus> whereever they come from
214 2015-09-02 08:10:52 <gmaxwell> wumpus: see my prior complaint about existing functionality overloads on whitebind.
215 2015-09-02 08:11:06 <wumpus> so if the funky relay policy is a problem, let's add an option to disable that
216 2015-09-02 08:11:37 <gmaxwell> maybe add a network call to request it, and accept those calls only from whitebound hosts?
217 2015-09-02 08:11:38 <wumpus> yes I get your point I just don't like the solution of 'treat localhost special, except if torbind'
218 2015-09-02 08:12:09 <gmaxwell> wumpus: well it is somewhat special. Bandwidth to it is free in any case we care to address.
219 2015-09-02 08:12:10 <wumpus> there are so many arbitrary rules already with regard to networking
220 2015-09-02 08:12:21 <wumpus> oh sure, you can argue for it...
221 2015-09-02 08:12:24 <gmaxwell> I know.
222 2015-09-02 08:12:58 <wumpus> people may have set other kinds of tunnels as well that appear to connect from localhost
223 2015-09-02 08:13:02 <jonasschnelli> In a "full", "no history", "node network is off", can't we just say in "node network is off"-mode we keep some connections to peers with NODE_NETWORK and just try to keep the chain in sync?
224 2015-09-02 08:13:02 <wumpus> tor is not the only exception
225 2015-09-02 08:13:11 <wumpus> in general, being IP neutral is good
226 2015-09-02 08:13:25 <jonasschnelli> s/with NODE_NETWORK/with NODE_NETWORK OFF
227 2015-09-02 08:14:02 <gmaxwell> jonasschnelli: yes, I think existing connections could stay put. Instead "node network is off" could mean e.g. INV delay.
228 2015-09-02 08:14:22 <gmaxwell> or nothing at all special in the first implementation.
229 2015-09-02 08:14:41 <jonasschnelli> but INV delay can't really prevent them from using more bandwidth and getting high bills?
230 2015-09-02 08:14:48 <gmaxwell> (INV delay is annoying to implement)
231 2015-09-02 08:14:54 <jonasschnelli> very unlikely but could still happen
232 2015-09-02 08:15:36 <gmaxwell> If they are malicious, no. The nonmalicious case is if you end up healing a partition. My view is that it's okay to very rarely go over to heal a partition.
233 2015-09-02 08:15:38 <jonasschnelli> Okay.. i'll aim for completely disabling the network in case "node network is off" is reached.
234 2015-09-02 08:16:09 <wumpus> depends on how you call it. if you call it a hard limit, you should never go over it
235 2015-09-02 08:16:16 <gmaxwell> Malicious is more tricky. though we could just hangup if they try doing any thing incompatible with node-network-off.
236 2015-09-02 08:17:19 <jonasschnelli> A hard limit is probably something node operators like. But not sure if it could harm the network in general. I somehow was trying to avoid hard limits, to only add options to "optimize" the bandwidth but not limit it...
237 2015-09-02 08:17:22 <gmaxwell> e.g. if you've run out, they're still technically node network because we haven't disconnected them, but we stop sending INVs (except for our own txn) and if they getdata anything we haven't INVed or getmempool or do anything other than INV and ping and respond to our getdatas we disconnect them?
238 2015-09-02 08:17:27 <wumpus> better to lose functionality in that case, even to quit the client, than e.g. get a bandwidth bill. But if the limit is more advisory it matters less.
239 2015-09-02 08:18:14 <gmaxwell> wumpus: I think the most useful thing is a limit which switches to great mitigation early enough that it cannot go over absent some dos attack or screwup on our part.
240 2015-09-02 08:18:14 <wumpus> in any case it's important to document that distinction well
241 2015-09-02 08:18:25 <gmaxwell> Is that a hard limit or not?
242 2015-09-02 08:18:31 <wumpus> no, that's not a hard limit
243 2015-09-02 08:19:01 <gmaxwell> Limits that make nodes drop off the network can be pretty harmful to the user however. And keep in mind even if we shut down, a DOS attack still wastes bandwidth.
244 2015-09-02 08:19:18 <gmaxwell> I agree with what you're saying, just trying to feel this out.
245 2015-09-02 08:19:30 <jonasschnelli> If one would like a hard limit, he could use network tools. I think we should not encourage the use of hard limits.
246 2015-09-02 08:19:32 <wumpus> a hard limit is a dumb counter that counts bandwidth and cuts off when it is exceeded. Its goal is to avoid extra costs or other problems at all costs, even to functionality.
247 2015-09-02 08:19:44 <wumpus> so if you don't want that, don't call it that
248 2015-09-02 08:19:55 <gmaxwell> E.g. "sure I want a hard limit.. oh hey how did my node get partitioned right after a payment to me that was reversed but I didn't see the loss of confirmation!?"
249 2015-09-02 08:19:59 <jonasschnelli> We should add tools that can help prevent reaching ISP limits but not really guarantee it.
250 2015-09-02 08:20:21 <jonasschnelli> That's why i had -historicalblocklimit in mind.
251 2015-09-02 08:20:37 <gmaxwell> maxuploadtarget is I think not confusing.
252 2015-09-02 08:20:38 <wumpus> gmaxwell: it'd be mostly for users that run a node on a VPS and don't have any critical use for it
253 2015-09-02 08:21:05 <gmaxwell> We'll try to get that target, but no promise. (though I think in practice, we'll respect it quite well)
254 2015-09-02 08:21:11 <wumpus> same as for tor relays basically
255 2015-09-02 08:21:44 <gmaxwell> difference being a tor relay hitting a 'hard' limit and partitioning it doesn't potentially get the user robbed.
256 2015-09-02 08:21:50 <wumpus> although even for tor the relay functionality is separate; e.g. if it reaches the limit it will still route *your* traffic
257 2015-09-02 08:22:08 <gmaxwell> yea, tor has a separate limiter for local
258 2015-09-02 08:22:25 <wumpus> so that's where the whitebind would come in in our case
259 2015-09-02 08:23:36 <wumpus> I think it's fair that whitelisted can ignore the limit, they're not created by default, so the whitelist documentation could include that warning
260 2015-09-02 08:24:00 <jonasschnelli> Agreed.
261 2015-09-02 08:24:08 <gmaxwell> Yep.
262 2015-09-02 08:24:41 <gmaxwell> Still can't put relaynetworkclient and p2pool behind whitebind. Not sure how to fix without something thats ugly or breaks existing whitebind users. :-/
263 2015-09-02 08:25:39 <gmaxwell> lol
264 2015-09-02 08:26:05 <jonasschnelli> So any objections if i start implementing (a first implementation) "-maxuploadtarget" (exclude -whitebind-ed nodes) with two modes ["full"] and ["no history"]. No history gets triggered if <target> - <tbd>*MAXBLOCKSIZE has been reached?
265 2015-09-02 08:26:10 <gmaxwell> I've heard that in some cultures white is more a symbol of death, than e.g. safety. :)
266 2015-09-02 08:26:39 <gmaxwell> jonasschnelli: I think that sounds like a good start!
267 2015-09-02 08:26:45 <wumpus> (to be more serious: instead of white/black/grey, there could be more granular flags)
268 2015-09-02 08:27:04 <gmaxwell> ultravioletbind.
269 2015-09-02 08:27:26 <gmaxwell> plaidbind.
270 2015-09-02 08:27:29 <jonasschnelli> I have also thought about an option to define the timeframe for the "target". 1 Day seems to be good. But not sure if it should be configurable.
271 2015-09-02 08:27:29 <wumpus> hehehe
272 2015-09-02 08:28:00 <wumpus> jonasschnelli: or maybe start with limiting archive blocks requests? that's the immediate DoS
273 2015-09-02 08:28:08 <wumpus> jonasschnelli: (don't know what is more practical)
274 2015-09-02 08:28:19 <gmaxwell> jonasschnelli: Maybe do the simplest things first, ... what tor does is quite thought through and might be something we should imitate if we want something more flexible.
275 2015-09-02 08:28:33 <wumpus> jonasschnelli: in Tor it's configurable between day, month. But I'm fine with just fixing it at day.
276 2015-09-02 08:28:59 <jonasschnelli> Okay... i'll keep a hardcoded 60*60*24.
277 2015-09-02 08:29:07 <wumpus> can always be extended later
278 2015-09-02 08:29:12 <gmaxwell> interesting thing about day is when does the interval begin? we really don't want the whole network beginning its day at the same time.
279 2015-09-02 08:29:12 <jonasschnelli> Sure.
280 2015-09-02 08:29:38 <gmaxwell> otherwise capacity exhaustion synchronization is a potential issue.
281 2015-09-02 08:29:39 <wumpus> (we don't remember state between runs, so month may be giving promises it's unable to hold)
282 2015-09-02 08:29:42 <jonasschnelli> I think we don't reflect the real day... we just reflect the last 24h?
283 2015-09-02 08:30:18 <gmaxwell> jonasschnelli: okay, well I think we've talked about enough that its better to twiddle than discuss.
284 2015-09-02 08:30:20 <wumpus> well it needs to be a proper day, with a start and end interval
285 2015-09-02 08:30:28 <wumpus> but when.. yeah...
286 2015-09-02 08:30:39 <wumpus> maybe offset dependent on client start
287 2015-09-02 08:30:59 <gmaxwell> wumpus: for a month I think thats true, though for day-- less clear? does anyone have daily bandwidth usage limits?
288 2015-09-02 08:31:19 <jonasschnelli> probably no. But it's easy to adjust.
289 2015-09-02 08:31:40 <wumpus> gmaxwell: I really don't know. Probably safe to assume some, somewhere in the world, do
290 2015-09-02 08:31:44 <jonasschnelli> But right... what would a user enter if he starts bitcoind at 15th of sept?
291 2015-09-02 08:32:03 <jonasschnelli> (and facing a monthly ISP limit)
292 2015-09-02 08:32:23 <gmaxwell> jonasschnelli: I believe tor would let it use the monthly limit in those 15 days.
293 2015-09-02 08:32:32 <wumpus> jonasschnelli: size of monthly quota / 35 or so to be safe?
294 2015-09-02 08:33:14 <jonasschnelli> This would mean to take the current day of the month into the calculation...
295 2015-09-02 08:33:16 <wumpus> jonasschnelli: the idea is to stay below the limit, keeping a safety margin makes sense, but it depends on the user, they can decide
296 2015-09-02 08:33:20 <wumpus> no!
297 2015-09-02 08:33:42 <gmaxwell> I think right now we should just do daily.
298 2015-09-02 08:33:42 <wumpus> the idea is just to count the usage in some timeframe, not more
299 2015-09-02 08:33:46 <wumpus> please don't overcomplicate it
300 2015-09-02 08:33:59 <jonasschnelli> Yes. That's also what i'm trying.
301 2015-09-02 08:34:04 <jonasschnelli> Okay. I'll focus on daily.
302 2015-09-02 08:34:18 <jonasschnelli> I'd also like to co-use the potential PR for bandwidth stats.
303 2015-09-02 08:34:24 <gmaxwell> there are extra issues that arise e.g. for monthly limits. Worrying about network synced caps becomes a much bigger concern on monthly limits.
304 2015-09-02 08:34:50 <gmaxwell> E.g. whole network runs low on capacity mid month because it gobbled it all up during some stupid attack the first few days.
305 2015-09-02 08:34:51 <wumpus> just make the month start day configurable
306 2015-09-02 08:35:04 <wumpus> it depends on what the user wants to account anyway
307 2015-09-02 08:36:07 <gmaxwell> wumpus: My thinking is that a longer accounting interval can always be represented as 1/Nth of the allowance over an N-fold shorter one. Am I misthinking? The cost is a loss of allocation flexibility, but that flexibility is generally a liability too.
308 2015-09-02 08:36:17 <jonasschnelli> to prevent DOS attacking nodes, a -pernodedaylimit could make sense.
309 2015-09-02 08:36:21 <wumpus> they'd usually set it to the billing period of the ISP. But don't have to.
310 2015-09-02 08:36:48 <wumpus> gmaxwell: yes it's basically just a computation help
311 2015-09-02 08:37:04 <gmaxwell> Yea, I was thinking earlier in the conversation that all this could also be applied per peer too but... I feared you would rightfully strangle me if I mentioned that.
312 2015-09-02 08:37:08 <wumpus> it's ok to just support daily limits for now
313 2015-09-02 08:37:15 <jonasschnelli> gmaxwell: haha
314 2015-09-02 08:38:09 <gmaxwell> so long as the consequence of someone abusively exhausting your limit is inconsequential (beyond the loss of capacity) I think we don't have to worry too much about per peer.
315 2015-09-02 08:39:52 <wumpus> rightfully strangle, yeah. We can't reliably identify peers, so I don't see how per-peer could work. Of course you could switch to e.g. subnets... but sounds like a lot of complication for questionable gain
316 2015-09-02 08:40:05 <wumpus> and with IPv6 it's unclear what size to use
317 2015-09-02 08:41:13 <gmaxwell> wumpus: if some peer is stupidly exhausting your limit in an obviously dumb and abusive way, it's fine to punt it, sure it can reconnect. But thats a separate issue which we can improve with things like hashcash.
318 2015-09-02 08:41:14 <wumpus> global limits are clearer
319 2015-09-02 08:41:25 <gmaxwell> But thats a deep rabbit hole in detecting that.
320 2015-09-02 08:41:39 <wumpus> gmaxwell: yes, I'd like to avoid anything that can be trivially bypassed by reconnecting
321 2015-09-02 08:42:18 <jonasschnelli> with the global limits an attacker knowing that you are using a limit could spam you and make sure your chain cannot get in sync anymore.
322 2015-09-02 08:42:25 <wumpus> if your attack model relies on dumbness of attackers you're playing an eternal cat and mouse game, even dumb attackers become smarter
323 2015-09-02 08:42:45 <gmaxwell> jonasschnelli: thats why I said above "so long as the consequence of someone abusively exhausting your limit is inconsequential"
324 2015-09-02 08:43:05 <gmaxwell> if the limits cannot make you partition, then I don't care so much about dealing smartly with attacks.
325 2015-09-02 08:43:17 <jonasschnelli> Agreed.
326 2015-09-02 08:43:33 <gmaxwell> And I think we can achieve that, by switching to conservative modes, and having a small risk we go over (and don't promise the user we won't go over)
327 2015-09-02 08:45:04 <jonasschnelli> is there a definition of "historical block", GetBlockTime() < now-oneday?
328 2015-09-02 08:45:32 <gmaxwell> Is there a reason we have to not be very loose about this definition? e.g. two weeks?
329 2015-09-02 08:45:55 <gmaxwell> Basically what we want to do in this case is punt IBD-ing hosts and no one else, I think.
330 2015-09-02 08:47:04 <gmaxwell> hm. well lets see, if your limit is a gigabyte, then letting someone download the last two weeks of blocks is not a great plan!
331 2015-09-02 08:48:09 <gmaxwell> bleh
332 2015-09-02 08:48:49 <wumpus> I think the idea behind 'not servicing historical blocks' is that you'll honor requests for the latest blocks in the spirit of relaying blocks, but not more
333 2015-09-02 08:49:11 <wumpus> but it's open for other interpretations, sure
334 2015-09-02 08:49:13 <gmaxwell> I'm sad about hitting non-IBDing hosts, not everyone leaves their nodes running all the time. making their startup times worse, kinda lame. But I don't see any way of being compatible with small limits without being fairly strict there.
335 2015-09-02 08:49:54 <wumpus> No need to be sad. I'm sure they'll find some other node to sync from.
336 2015-09-02 08:50:41 <gmaxwell> yes, I'd wild-ass-guess each limited node ends up being an additional 10 second delay.
337 2015-09-02 08:50:52 <gmaxwell> (just from the time it takes to successfully connect)
338 2015-09-02 08:51:09 <wumpus> assume they're connected to more nodes than you
339 2015-09-02 08:51:37 <phantomcircuit> gmaxwell, also upload is almost always counted differently than download
340 2015-09-02 08:52:26 <gmaxwell> Lets just assume every one of these limited nodes is someone who wouldn't be running a node at all.
341 2015-09-02 08:52:40 <gmaxwell> I am no longer sad. Go forth and be fruitful with awesome functionality. :)
342 2015-09-02 08:53:55 <wumpus> good! - yes it's also important to consider what the node operators wants, otherwise they may stop running a node completely, not just what's best for others, although it's obviously a difficult compromise.
343 2015-09-02 08:54:21 <phantomcircuit> lol
344 2015-09-02 09:14:33 <wumpus> oh re: the fuss about port 8333 being blocked, so it turns out the ISP in question was blocking *all* incoming TCP connections w/ some settings
345 2015-09-02 09:14:50 <wumpus> good, at least we don't need to start randomizing ports yet...
346 2015-09-02 09:15:14 <gmaxwell> also incoming, the person in question was fussing on reddit and saying they couldn't run bitcoin core at all, I tried to get them to clarify if they were really blocking _outbound_ too.
347 2015-09-02 09:17:21 <wumpus> (I've had similar issues where my modem was NATing instead of in bridge mode as expected, so incoming connections ended at the extra level of NATing, no ISP maliciousness involved just a borky modem)
348 2015-09-02 09:19:51 <jouke> My parents just received a new modem where, due to bugs, port forwarding isn't working.
349 2015-09-02 09:20:43 <wumpus> gmaxwell: port 8333 *outgoing* blocked is an interesting scenario too, and somewhat harder to handle. There are a few nodes on other ports but it may take longer to find them
350 2015-09-02 09:22:48 <s7r> one solution is onlynet=tor and use Tor, censorship won't matter in this case. Second, I could make my full nodes listen on port 80...
351 2015-09-02 09:23:19 <phantomcircuit> wumpus, i wonder what would happen if i was to spin up a bunch of nodes with random listening ports
352 2015-09-02 09:23:28 <phantomcircuit> or even nodes that listen on all the ports
353 2015-09-02 09:25:20 <wumpus> nodes that advertise too many ports on one IP could be seen as suspicious by the seeder
354 2015-09-02 09:25:55 <s7r> it can also help Sybils
355 2015-09-02 09:26:06 <wumpus> though I love the bombastic idea of 'just listen on all ports'
356 2015-09-02 09:26:44 <s7r> in this case we could not advertise a certain port, and a peer trying to connect to us tries 3-5 random high ports?
357 2015-09-02 09:26:59 <wumpus> turns attention to how useless port filtering is
358 2015-09-02 09:27:24 <s7r> and how annoying port forwarding/nat is
359 2015-09-02 09:27:26 <s7r> :)
360 2015-09-02 09:29:03 <phantomcircuit> wumpus, shrug seeder should just decide it's a single peer
361 2015-09-02 09:29:07 <phantomcircuit> and ignore port numbers
362 2015-09-02 09:29:53 <midnightmagic> That would disable VPN'd nodes that use common gateways.
363 2015-09-02 09:37:02 <phantomcircuit> midnightmagic, shrug
364 2015-09-02 09:37:15 <phantomcircuit> the seeders goal is to get you a bunch of random connections
365 2015-09-02 09:44:39 <midnightmagic> right. "seeder" sorry.
366 2015-09-02 09:44:56 <midnightmagic> random choice between nodes on that IP might be good
367 2015-09-02 11:31:59 <harding> The following Bitcoin.org PR could use review from core devs; it adds a dozen new pages about Bitcoin Core to the website: https://github.com/bitcoin-dot-org/bitcoin.org/pull/1044
368 2015-09-02 11:33:19 <wumpus> harding: ok, will check
369 2015-09-02 11:40:54 <wumpus> harding: concept looks great to me
370 2015-09-02 11:42:46 <gmaxwell> harding: very cool
371 2015-09-02 11:44:06 <gmaxwell> I was talking to wumpus earlier about improving communication from the broader development community around Bitcoin Core and related to wider bitcoin ecosystem. When you piped up a moment ago my comment to wumpus was "Oh did harding go solve the communication problem for us?"
372 2015-09-02 11:44:19 <gmaxwell> (no, but this seems nice in general!)
373 2015-09-02 12:00:29 <harding> Thanks, both of you!
374 2015-09-02 12:02:53 <harding> gmaxwell: Saivann and I discussed maybe hosting a core development blog similar to what the Foundation did for awhile. Is that in line with what you were discussing?
375 2015-09-02 12:03:19 <Diablo-D3> that'd be interesting
376 2015-09-02 12:03:36 <Diablo-D3> harding: like, talk about major patches committed, bips, etc?
377 2015-09-02 12:04:14 <harding> Diablo-D3: yes. Post written or co-written by devs.
378 2015-09-02 12:04:23 <harding> Posts*
379 2015-09-02 12:04:32 <Diablo-D3> I'd sub that in my rss reader.
380 2015-09-02 12:23:52 <wumpus> harding: I like that idea, yes. I've seen similar things for other open source projects. It could both aggregrate e.g. existing blogs from developers, as well have posts of its own
381 2015-09-02 12:28:58 <wumpus> anyhow, the site is awesome
382 2015-09-02 12:32:02 <harding> wumpus: thank you; I'm really pleased you like it.
383 2015-09-02 12:34:35 <Diablo-D3> yeah a semi-planet for bitcoin
384 2015-09-02 12:34:48 <Diablo-D3> aggregates dev content plus non-dev oc
385 2015-09-02 14:17:50 <jonasschnelli> harding: #1044 looks really nice! Well done.
386 2015-09-02 14:39:04 <instagibbs> harding: you beast, nice update
387 2015-09-02 14:43:33 <wumpus> do other projects have this problem as well?
388 2015-09-02 14:49:16 <jonasschnelli> wumpus: btcdrak mentioned http://docs.travis-ci.com/user/migrating-from-legacy/
389 2015-09-02 14:53:46 <wumpus> thanks
390 2015-09-02 14:54:22 <wumpus> so that's no longer optional?
391 2015-09-02 14:58:01 <jonasschnelli> wumpus: haven't looked at it. Sorry. But it looks like it's after a travis optimization.
392 2015-09-02 15:03:58 <harding> jonasschnelli, instagibbs: thanks!
393 2015-09-02 15:04:50 <cfields> wumpus: https://github.com/bitcoin/bitcoin/pull/6617
394 2015-09-02 15:05:29 <cfields> wumpus: sorry, i thought you'd see that today. mind trying to get that in asap?
395 2015-09-02 15:05:49 <cfields> jonasschnelli / btcdrak: the travis upgrade is on my short-list. they finally provide everything we need to move, it looks like
396 2015-09-02 15:13:20 <wumpus> does that mean a newer wine too? *ducks*
397 2015-09-02 15:13:44 <wumpus> cfields: oops! didn't see that one
398 2015-09-02 15:14:10 <cfields> wumpus: yea, i'm sure the wine will be newer
399 2015-09-02 15:14:33 <cfields> wumpus: btw, i guess i failed to mention, the reason i went on such a hunt for your wine problem is because i see the same thing locally
400 2015-09-02 15:14:45 <cfields> so i assumed it was wine in general
401 2015-09-02 15:15:10 <cfields> but you're probably correct in assuming that it's just fixed in a newer version, as mine's pretty old
402 2015-09-02 15:15:28 <wumpus> I don't see the same thing locally (w/ wine-1.7.50 as included in ubu 14.04). And jonasschnelli didn't see it on real-windows. Which version do you have?
403 2015-09-02 15:16:01 <cfields> i'm still on an ancient ubuntu install to help diagnose old lib issues, sec
404 2015-09-02 15:16:14 <cfields> wine-1.4.1
405 2015-09-02 15:17:06 <jonasschnelli> cfields: missed the recent discussion, but what issue exactly do you get over wine/libeventRPC?
406 2015-09-02 15:17:11 <wumpus> 1.4... omg :)
407 2015-09-02 15:18:11 <cfields> jonasschnelli: i believe there are 2 separate ones. wumpus are both (REUSE and http connect immediate fail) fixed with newer wine?
408 2015-09-02 15:18:54 <wumpus> not the reuse one afaik
409 2015-09-02 15:18:59 <jonasschnelli> I have run the RPC tests on Windows 8 64bit (VMWare VM) multiple times... no errors.
410 2015-09-02 15:19:07 <wumpus> just the one that requires the rpctimeout=1 workaround
411 2015-09-02 15:19:18 <cfields> jonasschnelli: yes, that one won't be an issue on native windows
412 2015-09-02 15:19:21 <jonasschnelli> But not sure if REUSE would be caught though.
413 2015-09-02 15:20:44 <cfields> jonasschnelli: the REUSE issue is a wine problem. It boils down to the fact that SO_REUSEADDR means something different in Unix than Windows. So the option isn't enabled for windows builds, though it's necessary at runtime when running on Unix, so wine does the wrong thing
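In code terms, that difference is why the option is only set outside of Windows -- a sketch of the conditional, not a verbatim quote of Core's net code:

    #ifdef WIN32
    #include <winsock2.h>
    typedef SOCKET sock_t;
    #else
    #include <sys/socket.h>
    typedef int sock_t;
    #endif

    // On Unix, SO_REUSEADDR just lets a restarted daemon rebind a port stuck in
    // TIME_WAIT. On Windows, the same flag lets a socket bind a port another
    // socket is actively using, so Windows builds leave it off -- and Wine,
    // running the Windows build on a Unix TCP stack, then hits the rebind problem.
    void SetReuseAddr(sock_t s)
    {
    #ifndef WIN32
        int on = 1;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char*)&on, sizeof(on));
    #else
        (void)s;
    #endif
    }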
414 2015-09-02 15:21:05 <wumpus> the reuse issue seems kind of fundamental, so I didn't even try without that workaround
415 2015-09-02 15:22:29 <wumpus> now hardcoded it in bitcoin-cli.cpp again, seeing if it passes with that
416 2015-09-02 15:25:44 <cfields> wumpus: i can do local tests for you if that would help
417 2015-09-02 15:26:20 <wumpus> if it does I'd be really confused. Although rpctimeout=1 also applies to the server if it's set in bitcoin.conf, so maybe that breaks it in another place.
418 2015-09-02 15:27:02 <cfields> wumpus: yea, setting the timeout too low in the server causes disconnects before full responses can be served
419 2015-09-02 15:27:09 <cfields> i hit that while testing
420 2015-09-02 15:27:25 <wumpus> right... *considers disabling the windows RPC tests until we can use a newer wine*
421 2015-09-02 15:28:28 <cfields> wumpus: i'll PR a quick wine bump, let's see how it goes
422 2015-09-02 15:28:35 <wumpus> hmm maybe a different option name for the client -rpcclienttimeout
423 2015-09-02 15:29:31 <wumpus> cfields: you can just upgrade wine on travis?
424 2015-09-02 15:29:52 <cfields> wumpus: i still don't really understand the issue there. i guess i need to experiment with non-windows to understand how it's supposed to work
425 2015-09-02 15:29:55 <cfields> wumpus: yea
426 2015-09-02 15:30:09 <wumpus> I don't understand the issue either. But I don't want to understand it. It works on windows and newer wine.
427 2015-09-02 15:30:21 <wumpus> I just want the tests to pass and move on :)
428 2015-09-02 15:30:37 <cfields> wumpus: it seems like it shouldn't, that's what confuses me
429 2015-09-02 15:30:46 <wumpus> no use chasing emulator rabbits
430 2015-09-02 15:30:58 <cfields> though i guess it's just because i've only observed one behavior, so i associate that with how it's supposed to function
431 2015-09-02 15:31:56 <wumpus> especially problems that apparently have been solved
432 2015-09-02 15:33:32 <wumpus> but upgrading wine sounds awesome. We should have done that sooner :)
433 2015-09-02 15:34:23 <cfields> 2min
434 2015-09-02 15:35:03 <Tyson_> Tysontko
435 2015-09-02 15:46:05 <cfields> wumpus: https://github.com/bitcoin/bitcoin/pull/6620
436 2015-09-02 15:49:40 <wumpus> cfields: awesome
437 2015-09-02 15:50:17 <cfields> wumpus: might be worth trying 1.6 too, so we have an idea of min working version
438 2015-09-02 15:50:17 <jonasschnelli> cfields: fanquakes computer is definitively compromised :p -> #6619
439 2015-09-02 15:50:57 <cfields> jonasschnelli: hehe. that's worrisome if their source tarball is changing though
440 2015-09-02 15:51:34 <wumpus> wow. would be interested in comparing them
441 2015-09-02 15:52:42 <wumpus> it's on http not https so could be a MITM
442 2015-09-02 15:52:56 <wumpus> no need for his computer itself to be compromised
443 2015-09-02 15:56:23 <cfields> the checksum is checked during download and extraction, too. So assuming he actually built depends locally, there's no way it was just a corrupt download
444 2015-09-02 15:59:19 <wumpus> IIRC .gz includes its own CRC, so if the file is corrupted it'd error out during extraction
445 2015-09-02 16:02:29 <helo> correct. i just stopped making my own checksum companion files to detect gz/bz2 corruption, because it's not needed.
446 2015-09-02 16:30:44 <wumpus> cfields: getting that 400kb file in chromium too - interesting
447 2015-09-02 16:30:56 <cfields> wumpus: figured it out, posting now
448 2015-09-02 16:33:24 <cfields> ah, heh, i bet i know
449 2015-09-02 16:33:47 <wumpus> ohh it's simply ungzipped
450 2015-09-02 16:33:52 <cfields> i bet chrome sends gzip in its header, receives a gz stream, and decompresses on the fly
451 2015-09-02 16:36:12 <fanquake> cfields cheers for the info
452 2015-09-02 16:36:30 <fanquake> was going a little crazy there for a minute
453 2015-09-02 16:36:44 <wumpus> cfields: yes - think I've seen this before, though in a different context
454 2015-09-02 16:37:52 <cfields> that's odd. I wonder if it's being served as an unexpected mime type.
455 2015-09-02 16:39:44 <wumpus> probably a server misconfiguration. Good to hear it's not anything suspicious at least
456 2015-09-02 16:45:28 <cfields> wumpus: wine bump is all green. Want to pull that commit on top of your libevent PR and see if you can drop some workarounds?
457 2015-09-02 16:46:17 <cfields> whoops, i was too slow :)
458 2015-09-02 16:59:39 <wumpus> cfields: heh yes, I just merged it
459 2015-09-02 17:00:00 <wumpus> and dropped the rpctimeout on the libevent pull
460 2015-09-02 17:01:50 <wumpus> +workaround
461 2015-09-02 17:17:30 <wumpus> win still failing with the same problem :/
462 2015-09-02 17:23:59 <cfields> wumpus: it still seems like it should be unrelated to win to me. I'll repro the problem as i was before, then switch to a linux build and see if it still happens
463 2015-09-02 17:25:00 <wumpus> well it is related to wine, not win. jonasschnelli successfully executed the tests on windows
464 2015-09-02 17:25:49 <wumpus> and I can't reproduce it locally on wine either
465 2015-09-02 17:26:08 <wumpus> I really wonder what is different in travis...
466 2015-09-02 17:26:44 <cfields> (and mine :)
467 2015-09-02 17:28:40 <drazisil> What would trigger the following error from bitcoind: "EXCEPTION: NSt8ios_base7failureE non-canonical ReadCompactSize() bitcoin in ProcessMessages()"
468 2015-09-02 17:30:36 <wumpus> a packet containing a serialized structure with a non-canonical size
469 2015-09-02 17:31:28 <drazisil> wumpus: so it's just a one-time thing, or something that requires action?
470 2015-09-02 17:32:48 <drazisil> What IS NSt8ios, I'm having a hard time getting any info on it.
471 2015-09-02 17:33:17 <wumpus> it's not problematic, just misformatted data sent over the network. Agree the message looks more dangerous than it should but ok...
472 2015-09-02 17:33:31 <drazisil> ok, thanks :)
473 2015-09-02 17:34:27 <wumpus> it's a "mangled" (converted to acceptable-in-c characters) c++ symbol
474 2015-09-02 17:34:28 <nwilcox> Is NSt8ios_base7failureE the result of C++ mangling into a linkable name?
475 2015-09-02 17:34:36 <wumpus> yes
476 2015-09-02 17:35:23 <nwilcox> Which code logs that exception? Is it using some kind of runtime inspection to look at link symbols?
477 2015-09-02 17:36:15 <nwilcox> More general question: Why is the name in the log the result of C++ symbol mangling, rather than the original C++ name?
478 2015-09-02 17:36:16 <cfields> I think that's just an exception.what()
479 2015-09-02 17:36:17 <wumpus> PrintExceptionContinue(&e, "ProcessMessages()");
480 2015-09-02 17:36:26 <nwilcox> Ok, thanks.
481 2015-09-02 17:36:53 <wumpus> typeid(*pex).name()
482 2015-09-02 17:37:03 <wumpus> so, again, yes
483 2015-09-02 17:40:52 <wumpus> it's undefined whether that returns a mangled or original C++ name, it just needs to return some identifier for the type
484 2015-09-02 17:41:18 <wumpus> (I think even unique)
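For illustration, what that exception name looks like before and after demangling -- a small standalone sketch using the GCC/Clang ABI helper, not code from Core:

    #include <cstdio>
    #include <cstdlib>
    #include <cxxabi.h>   // abi::__cxa_demangle, GCC/Clang specific
    #include <ios>
    #include <string>
    #include <typeinfo>

    int main()
    {
        try {
            throw std::ios_base::failure("non-canonical ReadCompactSize()");
        } catch (const std::exception& e) {
            const char* mangled = typeid(e).name();   // e.g. "NSt8ios_base7failureE"
            int status = 0;
            char* pretty = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
            std::printf("%s -> %s: %s\n", mangled,
                        status == 0 ? pretty : "(demangle failed)", e.what());
            std::free(pretty);
        }
        return 0;
    }

With libstdc++ the demangled form is std::ios_base::failure, i.e. the type the log line above is reporting in mangled form.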
485 2015-09-02 17:43:51 <wumpus> fixed a resource leak in bitcoin-cli in case evhttp_make_request fails, although I doubt this is what causes the issue: https://github.com/laanwj/bitcoin/commit/5c0c9895be3b5ab78ad3997fb101fbd8242a6824
486 2015-09-02 17:50:55 <cfields> ah, good catch
487 2015-09-02 17:51:08 <cfields> but no, it doesn't hit that condition. i've got a printf stuck there
488 2015-09-02 17:51:33 <cfields> i'm trying to get winedbg working with gdb so i can see what bitcoin-cli.exe is doing
489 2015-09-02 17:51:35 <wumpus> that condition does get hit here when using rpcwait w/closed port
490 2015-09-02 17:51:42 <cfields> it's spinning at 100% cpu
491 2015-09-02 17:52:34 <cfields> nope, not here (with busted wine)
492 2015-09-02 17:54:31 <cfields> aha, i might see it
493 2015-09-02 17:55:13 <wumpus> great
494 2015-09-02 17:55:45 <Newyorkadam> speaking of bitcoin-cli, I'm getting a weird error: "bitcoin-cli getrawtransaction 0437cd7f8525ceed2324359c2d0ba26006d92d856a9c20fa0241106ee5a597c9"
495 2015-09-02 17:55:52 <Newyorkadam> "error: {"code":-5,"message":"No information available about transaction"}"
496 2015-09-02 17:56:25 <wumpus> how is that a weird error?
497 2015-09-02 17:56:49 <Newyorkadam> the tx exists
498 2015-09-02 17:56:52 <Newyorkadam> it works for other txs
499 2015-09-02 17:58:10 <wumpus> then it's not in your mempool
500 2015-09-02 17:58:32 <wumpus> what are you trying to do? if you want to get a transaction from your wallet use 'gettransaction'
501 2015-09-02 17:58:34 <Newyorkadam> why would that be? I'm not caught up yet but this is the coinbase tx from block 9
502 2015-09-02 17:58:55 <Newyorkadam> no, I'm trying to decode the tx of each block's coinbase tx
503 2015-09-02 17:58:55 <wumpus> coinbase txes certainly don't enter the mempool
504 2015-09-02 17:59:05 <Newyorkadam> it works for other blocks though, just not block 10?
505 2015-09-02 17:59:21 <wumpus> do you have txindex enabled?
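For reference, looking up arbitrary historical transactions (such as an old coinbase whose outputs have long been spent) needs the optional transaction index; a sketch of the configuration involved (enabling it for the first time requires a reindex):

    # bitcoin.conf
    txindex=1

    # after restarting bitcoind with -reindex:
    bitcoin-cli getrawtransaction 0437cd7f8525ceed2324359c2d0ba26006d92d856a9c20fa0241106ee5a597c9 1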
506 2015-09-02 18:02:06 <wumpus> anyhow: wine 1.7.18 (on travis) is failing, wine 1.7.50 (here) is working
507 2015-09-02 18:02:14 <Newyorkadam> wumpus: just enabled it,
508 2015-09-02 18:02:33 <wumpus> so it is likely that some change inbetween those versions fixed the issue
509 2015-09-02 19:01:07 <jeremyrubin> FYI x-post from dev list, if you want to present at Scaling Bitcoin, you need to send in a proposal ASAP. Some people may have missed the process details.
510 2015-09-02 19:03:16 <jeremyrubin> Now put your hands down on the keyboard and send in your proposal ;)
511 2015-09-02 19:07:31 <zooko> ☺
512 2015-09-02 19:15:28 <Luke-Jr> jeremyrubin: what is already being covered?
513 2015-09-02 19:50:24 <jeremyrubin> Luke-Jr: There is a broad range of proposals so far.
514 2015-09-02 19:50:44 <jeremyrubin> Is there something you were considering?
515 2015-09-02 21:10:50 <amaclin> i've tried to break the chain of spam transactions with malleability https://tradeblock.com/bitcoin/tx/f95b2809285f1299a85f6dc5a3340b68da61085e1e476f0bb837f905e4528d92 without success. Spammer doesn't have a precompiled list of transactions - he creates them on the fly. So he continued his chain from the malleated tx
516 2015-09-02 21:26:17 <Luke-Jr> jeremyrubin: possibly, but I don't want to spend time thinking about a redundant presentation