Official eMule-Board: The Crumbs Project

The Crumbs Project Aka Sub Chunk Transfer

#1 netfinity (Master of WARP, Posts: 1658, Joined: 23-April 04)

Posted 16 December 2006 - 04:43 PM

Background:
About a year ago I (netfinity) started working on an implementation to exchange incomplete parts between clients by advertising completed AICH blocks when there were no other parts the remote client needed. This feature was later picked up and polished by the NEO mod. However, I always felt the feature was too much of a compromise. So I started looking at how the hybrids do it and decided that it would be a good foundation for my project.

Below I present the findings about the protocol that have been made so far. With reservation for errors, of course!

Crumbs Protocol Revision 1 [pr=1]
In the Crumbs protocol, the file is divided into 475 kB chunks instead of the normal 9500 kB chunks. This is reflected in the part status vector, which becomes roughly 20 times larger as a result.
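The ~20x figure follows directly from the ratio of the two chunk sizes; a quick sketch (the example file size and helper name are my own, for illustration only):

```python
import math

PART_SIZE_KB = 9500    # standard ed2k part size
CRUMB_SIZE_KB = 475    # crumb size in Crumbs protocol revision 1

def status_vector_bits(file_size_kb: int, chunk_size_kb: int) -> int:
    # One bit per chunk in the part status vector.
    return math.ceil(file_size_kb / chunk_size_kb)

# Example: a 700 MB (716800 kB) file.
parts = status_vector_bits(716800, PART_SIZE_KB)    # 76 bits
crumbs = status_vector_bits(716800, CRUMB_SIZE_KB)  # 1510 bits, ~20x more
```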

OPCODES said:

OP_CRUMBSETREQ - 0x69 <File Hash> [Handling of this op-code is required]
This opcode is used to request a Crumb hashset from a client.

OP_CRUMBSETANS - 0x68 <File Hash>[<has part hash set>[<Part hash set>]]<has crumb hash set>[<Crumb Hash set>]
This is the reply to OP_CRUMBSETREQ and contains a list of hashes, one for each Crumb in the file. A Crumb hash is the first 8 bytes of the MD4 hash of the Crumb. The reply also contains the part hash set if the file has more than one part.
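A sketch of how I read the OP_CRUMBSETANS layout. The exact framing is not spelled out above (whether the flag bytes are always present, and whether the list lengths are implied by the file size), so the function name and the always-present flag bytes are my assumptions:

```python
OP_CRUMBSETANS = 0x68

def encode_crumbsetans(file_hash: bytes, part_hashes: list, crumb_hashes: list) -> bytes:
    # <File Hash><has part hash set>[<Part hash set>]<has crumb hash set>[<Crumb hash set>]
    assert len(file_hash) == 16                # ed2k file hash (MD4)
    payload = bytearray([OP_CRUMBSETANS])
    payload += file_hash
    payload.append(1 if part_hashes else 0)    # <has part hash set>
    for h in part_hashes:                      # full 16-byte MD4 part hashes
        assert len(h) == 16
        payload += h
    payload.append(1 if crumb_hashes else 0)   # <has crumb hash set>
    for h in crumb_hashes:                     # first 8 bytes of each crumb's MD4
        assert len(h) == 8
        payload += h
    return bytes(payload)
```

The receiver would infer the list lengths from the file size, since both sides know the part and crumb counts.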

OP_CRUMBCOMPLETE - 0x6A <File Hash><Crumb Index> [This op-code can be ignored, but is not recommended]
This opcode is sent to all connected clients downloading the specific file when a Crumb is completed, i.e. it is used to advertise that a new Crumb has become available.

Hello TAGS said:

CT_PROTOCOLREVISION - "pr" = <Crumbs Protocol Revision>
This tag is often called the Horde tag, but I haven't noticed the tag having any relation to the Horde protocol. Instead, it is used to inform the remote client that we support part status vectors using Crumbs as the chunk size.


Unfortunately, the Crumbs protocol uses MD4 hashes, and some people are greatly concerned about its strength against pre-image attacks. Therefore I suggest an additional opcode to detect such attacks.

Extended OPCODES said:

OP_HASHSETSHASH - 0xNN <File Hash><SHA Hash of File Hashset><SHA Hash of Crumb Hashset>
This opcode would be sent when a remote client asks for the specified file and we haven't sent this info before. The hashed hashsets are the ones we currently believe are true. The benefit of knowing every hashset trusted by other clients is that we can know with reasonable certainty that we can trust a hashset if no other hashset has been published; i.e. if there is one good source, there must be at least one good hashset.
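The trust rule in the last sentence could be sketched like this (the names are mine; a real client would also track which sources vouched for each hashset):

```python
from collections import defaultdict

# file hash -> distinct (part-hashset SHA, crumb-hashset SHA) pairs seen so far
published = defaultdict(set)

def record_hashset_hash(file_hash: bytes, part_set_sha: bytes, crumb_set_sha: bytes) -> bool:
    """Record an OP_HASHSETSHASH announcement.

    Returns True when more than one distinct hashset has been published
    for the file, i.e. a possible attack that warrants cross-checking."""
    published[file_hash].add((part_set_sha, crumb_set_sha))
    return len(published[file_hash]) > 1
```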


SCT Protocol Revision 2 [pr=2]
As eMule does not use crumb hashes and already has the AICH hashing mechanism, a modified version of the protocol is proposed.

In this version, everything in Crumbs Protocol Revision 1 still has to be supported. What differs is that a new set of chunk sizes can be used. The new sizes are 180 kB, 360 kB, 720 kB, 1440 kB, 2880 kB, 5760 kB and "the entire file". The last is a part status vector with only one bit, which is set to 0 and is used to signify that the source doesn't yet have any parts to share. Note that all chunks except "the entire file" are truncated at the part boundary, as in AICH. The op-codes in revision 1 are not used with these chunk sizes.
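Since every chunk except "the entire file" is truncated at a part boundary, each 9500 kB part is split independently. A sketch of the resulting chunk count (the helper is my own, for illustration):

```python
import math

PART_SIZE_KB = 9500
CHUNK_SIZES_KB = [180, 360, 720, 1440, 2880, 5760]

def chunk_count(file_size_kb: int, chunk_size_kb: int) -> int:
    # Chunks never span a part boundary, so each part is split on its own.
    full_parts, tail_kb = divmod(file_size_kb, PART_SIZE_KB)
    count = full_parts * math.ceil(PART_SIZE_KB / chunk_size_kb)
    if tail_kb:
        count += math.ceil(tail_kb / chunk_size_kb)
    return count

# A 19500 kB file = 2 full parts + a 500 kB tail:
n180 = chunk_count(19500, 180)    # 53 + 53 + 3 = 109 chunks
n5760 = chunk_count(19500, 5760)  # 2 + 2 + 1 = 5 chunks
```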




I hereby declare this the official thread for the implementation of smaller chunks in eMule!

/netfinity

This post has been edited by netfinity: 24 August 2013 - 06:02 PM

eMule v0.50a [NetF WARP v0.3a]
- Compiled for 32 and 64 bit Windows versions
- Optimized for fast (100Mbit/s) Internet connections
- Faster file completion via Dynamic Block Requests and dropping of stalling sources
- Faster searching via KAD with equal or reduced overhead
- Less GUI lockups through multi-threaded disk IO operations
- VIP "Payback" queue
- Fakealyzer (helps you chosing the right files)
- Quality Of Service to keep eMule from disturbing VoIP and other important applications (Vista/7/8 only!)

#2 Interrupture (Member, Posts: 27, Joined: 14-December 06)

Posted 16 December 2006 - 07:03 PM

Hi netfinity. :)

First off I'd like to say I wholeheartedly agree with what you're trying to do in principle. Well done.

I think this is definitely a step in the right direction.

However, why introduce yet another variation on a protocol when two good ones already exist?

The 475 kB crumbs sound similar to the size of the crumbs used by eDonkey/Overnet?

But why not make the crumb size either 180 kB, to be compatible with the current AICH protocol, or alternatively 1 kB, to be compatible with TTH?

Also, according to many docs I've read recently there are definitely serious flaws with MD4, so IMHO I would say stick to SHA-1, which both TTH and AICH use anyway.

#3 netfinity

Posted 16 December 2006 - 08:17 PM

@Interrupture
The plan is to be completely compatible with the donkeys, so that I can use them as test clients. Also, a broad user base from the beginning is a great plus.

The reason not to use AICH is the large overhead it would cause. TTH might be interesting as a replacement for the AICH recovery system. If crumbs are too small, it would take enormous computer resources to keep track of them. In contrast to the part recovery protocol AICH, the Sub Chunk (Crumbs) protocol needs to exchange data with each client to function.

I've decided that hash collisions in the file data will not be that likely, due to the complexity. Instead, attackers will more likely try to find alternate hashsets that generate the same file hash and spam with those. So what we need is to detect the presence of this kind of attack. Once we've done that, we can start eliminating hashsets that we are certain are invalid. (Done by cross-checking with different hashing methods and comparing with the actual downloaded data.)

#4 DavidXanatos (Neo Dev, Posts: 1468, Joined: 23-April 04)

Posted 16 December 2006 - 08:32 PM

Interesting,

but aren't the donkeys dying out?
From what I know, MetaMachine has finally stopped development.

A donkey crumb is 2.6 (~3) times larger than an AICH block; I don't think this is a big enough difference to be a reason to introduce a new, third hash method.

If we are really going to implement a third hash method we should do it the right way and implement variable part sizes, so that small files can be divided into smaller parts and bigger files may have parts even bigger than the default part size. As far as I know BT does this and it seems to work like a charm.

BTW: SHA-1 is no longer considered secure, at least the shorter versions, so we should go for a really safe hash such as Whirlpool.

David

This post has been edited by DavidXanatos: 16 December 2006 - 08:45 PM

NeoLoader is a new file sharing client, supporting ed2k/eMule, Bittorent and one click hosters,
it is the first client to be able to download form multiple networks the same file.
NL provides the first fully decentralized scalable torrent and DDL keyword search,
it implements an own novel anonymous file sharing network, providing anonymity and deniability to its users,
as well as many other new features.
It is written in C++ with Qt and is available for Windows, Linux and MacOS.

#5 leuk_he (MorphXT team, Posts: 5975, Joined: 11-August 04)

Posted 16 December 2006 - 08:44 PM

DavidXanatos, on Dec 16 2006, 09:32 PM, said:

A donkey crumb is 2.6 (~3) times larger than an AICH block; I don't think this is a big enough difference to be a reason to introduce a new, third hash method.

If we are really going to implement a third hash method we should do it the right way and implement variable part sizes, so that small files can be divided into smaller parts and bigger files may have parts even bigger than the default part size. As far as I know BT does this and it seems to work like a charm.


Variable part size? Yeah, if you need additional hashes then variable part size would be great. That would scale better for larger files. In BitTorrent it is said that the optimal piece size divides the file into about 1000 pieces (except for very small files...). That could also solve the problem of the extreme hash set sizes for large (> 10 GB) files.

(but making up FRs is easy :-k building them is..... well you know)
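The ~1000-chunk rule of thumb can be sketched as picking the smallest power-of-two piece size that keeps the piece count at or below the target (the 16 kB floor and the doubling search are my assumptions, not a quote from the BT spec):

```python
def pick_piece_size(file_size: int, target_pieces: int = 1000,
                    min_piece: int = 16 * 1024) -> int:
    # Double the piece size until the file fits in <= target_pieces pieces.
    piece = min_piece
    while file_size / piece > target_pieces:
        piece *= 2
    return piece

# A 700 MB file ends up with 1 MB pieces (700 of them):
size = pick_piece_size(700 * 1024 * 1024)
```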

Quote

BTW: SHA-1 is no longer considered secure, at least the shorter versions, so we should go for a really safe hash such as Whirlpool.

had to look it up:

"WHIRLPOOL is a hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto that operates on messages less than 2^256 bits in length, and produces a message digest of 512 bits."


:confused: 512 bits & overhead.
Download the MorphXT emule mod here: eMule Morph mod

Trouble connecting to a server? Use kad and /or refresh your server list
Strange search results? Check for fake servers! Or download morph, enable obfuscated server required, and far less fake server seen.

Looking for morphXT translators. If you want to translate the morph strings please come here (you only need to be able to write, no coding required. ) Covered now: cn,pt(br),it,es_t,fr.,pl Update needed:de,nl
-Morph FAQ [English wiki]--Het grote emule topic deel 13 [Nederlands]
if you want to send a message i will tell you to open op a topic in the forum. Other forum lurkers might be helped as well.

#6 DavidXanatos

Posted 16 December 2006 - 08:47 PM

We can use AICH and define our variable crumb size as n*block_size where n >= 1; this way we can save a lot of space in the crumb status bit field.

@leuk_he
We don't have to use the entire 512 bits of WHIRLPOOL; we can shrink it to 256 bits by L_HALF XOR R_HALF, or to 128 in the same way, or we can cut it to any desired length. I actually don't know which of the two shrinking methods, the XOR or the cut, is more secure.
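Both shrinking methods are only a few lines. Python's hashlib has no Whirlpool, so SHA-512 stands in below purely as an example of another 512-bit digest; whether either method preserves the hash's security is exactly the open question here:

```python
import hashlib

def fold_digest(digest: bytes) -> bytes:
    # Halve a digest by XOR-ing its two halves (L_HALF XOR R_HALF).
    half = len(digest) // 2
    return bytes(a ^ b for a, b in zip(digest[:half], digest[half:]))

d512 = hashlib.sha512(b"example").digest()   # 64 bytes
d256 = fold_digest(d512)                     # 32 bytes (XOR method)
d128 = fold_digest(d256)                     # 16 bytes
cut256 = d512[:32]                           # 32 bytes (cut method)
```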

David

This post has been edited by DavidXanatos: 16 December 2006 - 08:48 PM


#7 leuk_he

Posted 16 December 2006 - 09:08 PM

DavidXanatos, on Dec 16 2006, 09:47 PM, said:

We can use AICH and define our variable crumb size as n*block_size where n >= 1; this way we can save a lot of space in the crumb status bit field.

@leuk_he
We don't have to use the entire 512 bits of WHIRLPOOL; we can shrink it to 256 bits by L_HALF XOR R_HALF, or to 128 in the same way, or we can cut it to any desired length. I actually don't know which of the two shrinking methods, the XOR or the cut, is more secure.

David


Sorry, BAD idea. Taking out a part of the resulting hash may make it much less secure, unless you really REALLY know what you are doing. Why not take a tested and proven hash function instead of trying to build something yourself?

A very short table is here:

http://en.wikipedia...._hash_functions

(Wikipedia, not sure if it is reliable!).
I do not think SHA-1 is very unreliable, from what I understood.

And think about the following: isn't it possible to use the AICH root hash? AICH is a hash tree after all. You would not need to use the lowest level of hashes, I think; you could also use intermediate hashes (read SHAHashSet.h). And you could still use the protocol from netfinity, I think, but then with pr=3 (or another value, or mod protocol, or whatever...).

#8 netfinity

Posted 16 December 2006 - 09:26 PM

Yeah, dynamic size would be neat.

Scaling the part status vector shouldn't be that hard, but the hashing is a little trickier. In order to share a crumb we need to know that it's not complete garbage, so we need to hash every crumb and verify it. This means we need to know every leaf of the hash tree. Just keeping the leaves of an AICH hash tree for a 512 GB file would take about 60 MB. On disk that isn't much of an issue, but keeping it in RAM is!
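The 60 MB figure checks out; with 180 kB AICH blocks and 20-byte SHA-1 leaf hashes:

```python
import math

AICH_BLOCK = 180 * 1024   # AICH block size in bytes
SHA1_LEN = 20             # bytes per SHA-1 leaf hash

def leaf_hash_bytes(file_size: int) -> int:
    # Memory needed to keep every leaf of the AICH tree for one file.
    return math.ceil(file_size / AICH_BLOCK) * SHA1_LEN

mem = leaf_hash_bytes(512 * 1024**3)   # ~59.7 MB for a 512 GB file
```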

#9 tHeWiZaRdOfDoS (Posts: 5630, Joined: 28-December 02)

Posted 17 December 2006 - 12:40 AM

IIRC SHA-1 *can* be broken, but it needs massive resource usage, thus it isn't a "real-time" danger :lol:

I also think we should use methods which are already in use, tested and proven to work well; introducing something new (and not widely tested and acknowledged) such as Whirlpool, meaning a hashing algorithm of unknown security, isn't a good idea IMHO.


The questions that seem important to me are:
Do you want to keep the crumb size at all costs?
Or would you agree to use a different system, just like BT for example, with a variable block size of 2^n (with n being a virtually arbitrary number)?
Of course it *would* be nice to have some userbase to start testing but it'd be even nicer to have a fully flexible, expandable system in the end.
Another advantage of the latter proposal is that there are some implementations "ready to use" in various open source BT clients... (though I currently don't know what hashing type they use... I guess it's SHA1?)

#10 DavidXanatos

Posted 17 December 2006 - 08:33 AM

Hmm....
If we used the hashing method of BT 1:1, this would be a big step in the direction of having BT support implemented. I like the idea.

David

#11 eklmn (Splendid Member, Posts: 232, Joined: 12-January 03)

Posted 17 December 2006 - 11:04 AM

Hi netfinity,

It's a good idea to discuss a new protocol. :) First of all I have to apologize, but the description of the Crumbs protocol (CP) is not enough to make any decision about this feature, due to lack of some information. So I would like to know the following:
1) How do you plan to use the CP when both clients support it? In parallel with the eMule protocol, or as a replacement for the original protocol?
2) How will unconnected clients be informed about new crumbs?

@all:
Regarding compatibility with the Hybrids & BT ... I don't like it & think it is a bad idea, because:
a) the share of Hybrids in the network is less than 1% &, as DavidXanatos already said, they are dying out.
b) compatibility with BT will lead to support of the BT network & as a result those clients will be lost for the ed2k network (the best example of this is the "Shareaza" client).

#12 Andu (Morph Team, Posts: 13015, Joined: 04-December 02)

Posted 17 December 2006 - 02:50 PM

I agree. Going down the BT path could lead to serious issues for the ed2k network. The main client of ed2k should remain dedicated. After all, what is the use of introducing a feature that will hurt the network in the medium run?
Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.
One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.


Dark Lord of the Forum


Morph your Mule

Need a little help with your MorphXT? Click here


#13 tHeWiZaRdOfDoS

Posted 17 December 2006 - 03:22 PM

I totally disagree!
We are talking about a protocol enhancement, not a change - this is totally unrelated and there won't be any issues ever, except the work for the modders to implement and handle the system, which is of no concern for the end-user.
IF a client supports BT then it won't be lost for eD2K - how did you come to such an assumption!? BTW, Shareaza is the worst example ever... supporting multiple networks but none working properly... we would (if ever) simply ADD BT functionality but keep full eD2K compatibility.

#14 DavidXanatos

Posted 17 December 2006 - 03:34 PM

We actually aren't talking about implementing BT support; BT support is much more than only having a compatible hash system. So even if we had exactly the same hash system as BT, it would still be a long way to go to have real BT support.

Implementing BT's hash method with variable part sizes actually wouldn't have any downsides for the ed2k network, I think.
And it would extend the functionality of the client, as for example eMule could generate torrent files and use them instead of links if anyone wants to.

David

#15 tHeWiZaRdOfDoS

Posted 17 December 2006 - 03:55 PM

If we'd agree on that then one problem remains: we should then use a fixed "crumb" size for eD2K purposes, because otherwise a client might download from someone with a 16 kB block size and from someone with a 128 kB block size - he can't track different block sizes (RAM usage...).
A proposal would be to use the "BT standard" (I did not read it in detail; maybe they have a different system actually in use) of having <1000 "crumbs" in each file, i.e. for files with a size of
<= 2000kB that would mean to use n = 1
<= 4000kB that would mean to use n = 2
<= 8000kB that would mean to use n = 3
....
<= 1048576kB that would mean to use n = 10

and so on.
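With exact powers of two (the table above rounds to decimal sizes), the rule can be written as: crumb size = 2^n kB, with n chosen so the file has at most roughly 1000 crumbs. A sketch, where the <= 2048 kB base case is my rounding of the 2000 kB threshold:

```python
import math

def crumb_exponent(file_size_kb: int) -> int:
    # <= 2048 kB -> n = 1, <= 4096 kB -> n = 2, ..., <= 1048576 kB -> n = 10
    if file_size_kb <= 2048:
        return 1
    return math.ceil(math.log2(file_size_kb / 1024))

# A 2 GB (2097152 kB) file gets 2**11 kB = 2 MB crumbs, i.e. 1024 of them.
n = crumb_exponent(2097152)
```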

#16 Interrupture

Posted 17 December 2006 - 03:58 PM

BT uses the SHA-1 hash to verify its pieces and also to hash the whole 'info' dictionary, to give an info_hash value. That's the value you see published on torrent sites.

The info dictionary contains the piece size, along with the concatenated string of piece hashes, the 'private' flag (used to signify that the torrent shouldn't be shared via DHT for example) and info on the file structure.
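For reference, the info_hash is just the SHA-1 of the bencoded info dictionary. A minimal sketch; the dictionary contents below are placeholders, not a real torrent:

```python
import hashlib

def bencode(obj) -> bytes:
    # Minimal bencoder covering the types used in .torrent files.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # Keys must be byte strings in sorted order per the spec.
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in sorted(obj.items())) + b"e"
    raise TypeError(type(obj))

info = {
    b"name": b"example.bin",
    b"length": 100000,
    b"piece length": 262144,
    b"pieces": b"\x00" * 20,   # placeholder: one 20-byte SHA-1 piece hash
}
info_hash = hashlib.sha1(bencode(info)).hexdigest()
```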

BT relies very heavily on implicit trust for file verification. I mean there's NO extra tree-hashing for file verification etc. Of course, one difference between ed2k 'links' and .torrent 'files' is that the .torrent already contains the SHA-1 piece hashes, whereas with ed2k we have to request the hash set and then decide whether to trust it or not, assuming it wasn't included in the initial link.

@Netfinity.

The beauty of the tree hash structure is that you DON'T need ALL the nodes to verify one leaf of it. You just need the corresponding siblings at each level of the tree in order to re-calculate (and hence verify) the root hash and in doing so verify the leaf hash itself.

Using AICH as a base example, if we take the very first block in the very first part, then to verify that it's genuine all we need is the hash from block 2, along with the sibling hashes from each tree level above that, whereupon the root hash can be calculated. If it agrees, we keep the block and it can immediately be reshared on the network; if it fails, we reject it.
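The sibling-hash verification described here is the standard Merkle-proof idea. A generic sketch with SHA-1 as in AICH (AICH's actual tree shape and hash-combination order differ in detail; this only shows the principle):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def parent(left: bytes, right: bytes) -> bytes:
    return H(left + right)

def verify_leaf(block: bytes, siblings: list, index: int, root: bytes) -> bool:
    # Recompute the root from one block plus one sibling hash per level.
    node = H(block)
    for sib in siblings:
        node = parent(node, sib) if index % 2 == 0 else parent(sib, node)
        index //= 2
    return node == root

# Tiny 4-block example: verify block 0 with just 2 sibling hashes.
blocks = [b"b0", b"b1", b"b2", b"b3"]
leaves = [H(b) for b in blocks]
root = parent(parent(leaves[0], leaves[1]), parent(leaves[2], leaves[3]))
proof = [leaves[1], parent(leaves[2], leaves[3])]
ok = verify_leaf(blocks[0], proof, 0, root)   # True
```

Note the proof size grows only with the logarithm of the block count, which is why the client side needs just a few hundred bytes.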

This means only a few hundred bytes are needed by the client for verification, but then the problem is how much info does the full source need to keep in memory at any one time to fulfill requests?

Yes, with your 512 GB example and AICH we need a lot of memory but a 10 GB file or so would only need about 1.2 MB which is much more reasonable? How many 512 GB files do you see being shared?

But an alternative could be that the full sources always have the tree hashes for the parts available in memory (about 20 kB for a 10 GB file), then make the 53 block part hashes available by calculation on the fly to any client requesting them?

This is a compromise on both sides: we trade memory for CPU time on the full source. The client can then use the 53 block hashes to verify ANY subsequent block received for that part. In addition, the block hashes themselves, along with the tree part hashes, enable verification all the way up to the root hash to ensure that nothing's going astray.

So this isn't a truly pseudo-random block sharing scheme, but (and it's a big BUT) clients don't have to wait for whole 9.28 MB parts to be available; they will instead very soon have lots of 180 kB blocks available to be shared/re-shared.

In other words the scheme above makes AICH integral to the crumb sharing idea NOT an 'additional extra' in the event of corruption.

All it would appear to add is an extra 1.5kB of overhead for each new part worked on and indeed doesn't break the current system either.

Like obfuscation, you can just enable it's use between compatible clients without breaking anything otherwise.

This then starts to approach BT's efficiency, with data being shared MUCH more quickly than before? There should surely be less queuing etc. Perhaps make a rotating 8-block queue or whatever?

I'm not suggesting that AICH is an ultimate block size or anything either. Obviously the tree hash can be applied to ANY block size, but 180 kB is a whole lot better than 9.28 MB!

This post has been edited by Interrupture: 18 December 2006 - 04:05 PM


#17 Interrupture

Posted 17 December 2006 - 04:29 PM

DavidXanatos, on Dec 17 2006, 03:34 PM, said:

We actually aren't talking about implementing BT support; BT support is much more than only having a compatible hash system. So even if we had exactly the same hash system as BT, it would still be a long way to go to have real BT support.

Implementing BT's hash method with variable part sizes actually wouldn't have any downsides for the ed2k network, I think.
And it would extend the functionality of the client, as for example eMule could generate torrent files and use them instead of links if anyone wants to.

David


There's much less of a problem with single files than there is with multiple-file torrents.

E.g. the SHA-1 hashes treat the multiple files referred to in the .torrent as concatenated with no gaps, so a piece hash could well cover some bytes from the end of file X together with some bytes from the beginning of file Y.

OK, that's still not a problem: if you have sources on ed2k for file X AND file Y then you can still verify that 'piece'.

Another difference is that there's no overall hash for each individual file contained in the .torrent in the current BT spec. Well, actually there is an optional MD5 hash, but it is only optional and I don't know how many torrent creators use it.

Also, with BT as it stands, when you load up a .torrent you KNOW that everyone in the swarm wants to either give you or get some data for that particular torrent.

If I'd gotten the .torrent from ed2k I'd be forever wondering if the gits in the swarm were sharing 10,000 other files at the same time! I.e. the current situation with eMule and those full-source guys that NEVER send you a single byte even if you queue for weeks! :-1:

This post has been edited by Interrupture: 17 December 2006 - 04:37 PM


#18 mkoorn (Magnificent Member, Posts: 336, Joined: 07-January 06)

Posted 17 December 2006 - 04:41 PM

Let me be the one to ask the stupid question:
why do we need to share incomplete chunks?
Rephrased: what is the existing problem with chunks that we need to solve?
mkoorn

#19 Interrupture

Posted 17 December 2006 - 04:52 PM

mkoorn, on Dec 17 2006, 04:41 PM, said:

Let me be the one to ask the stupid question:
why do we need to share incomplete chunks?
Rephrased: what is the problem existing with chunks we need to solve?


Time! Which in turn = queuing.

It would take 1/53rd of the time to make a 180 kB block available on the network compared to the time it takes to make one part available.

Or ask this question:

Why do we share whole chunks/parts rather than the complete file in ONE big chunk?

If you continue that argument, then it makes even 180 kB look big, doesn't it?
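For the record, the 1/53rd figure comes straight from the ed2k part and AICH block sizes:

```python
import math

PARTSIZE = 9728000      # ed2k part size in bytes (~9.28 MB)
BLOCKSIZE = 184320      # AICH block size in bytes (180 kB)

blocks_per_part = math.ceil(PARTSIZE / BLOCKSIZE)   # 53 (the last block is shorter)
```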

This post has been edited by Interrupture: 17 December 2006 - 04:52 PM


#20 mkoorn

Posted 17 December 2006 - 05:42 PM

Interrupture, on Dec 17 2006, 05:52 PM, said:

mkoorn, on Dec 17 2006, 04:41 PM, said:

Let me be the one to ask the stupid question:
why do we need to share incomplete chunks?
Rephrased: what is the problem existing with chunks we need to solve?


Time! Which in turn = queuing.

It would take 1/53rd time to make a 180 kB block available on the network compared to the time it would take to make 1 part available.

Or ask this question:

Why do we share whole chunks/parts rather than the complete file in ONE big chunk?

If you continue that argument, then it makes even 180 kB look big doesn't it?

Between slotfocus and the new chunk selection it would take an average chunk about 2 or 3 minutes to complete. Is this too long for your taste?
mkoorn
