Faster End Of Part With Multiple Sources? Code snippets and background.

#1 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 15 November 2005 - 04:10 AM

The problem

The problem is well known: at the end of a part, blocks that have been requested from a slower client can become locked. A faster client may have them and be ready to transfer them, but cannot be asked, because they have already been allocated to (requested from) the slower source.

Some points of reference:

Slow Download Ending Solution


Faster 'endgame', Increase the speed at end of downloads (Dazzle)

Dynamic Block Requests, A way to significantly speed up eMule
(Netfinity)


To partly deal with this issue, the official eMule 0.46c has:

void CUpDownClient::SendBlockRequests() said:

    // prevent locking of too many blocks when we are on a slow (probably standby/trickle) slot
    int blockCount = 3;
    if(IsEmuleClient() && m_byCompatibleClient==0 && reqfile->GetFileSize()-reqfile->GetCompletedSize() <= PARTSIZE*4) {
        // if there's less than two chunks left, request fewer blocks for
        // slow downloads, so they don't lock blocks from faster clients.
        // Only trust eMule clients to be able to handle less blocks than three

        if(GetDownloadDatarate() < 600) {
            blockCount = 1;
        } else if(GetDownloadDatarate() < 1200) {
            blockCount = 2;
        }
    }
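(For a sense of scale, assuming EMBLOCKSIZE is the usual 184320 bytes (~180 KB) and GetDownloadDatarate() is in bytes per second: at the 600 B/s threshold one block takes about 184320 / 600 ≈ 307 seconds to arrive, so three blocks locked on such a slot represent roughly 15 minutes of data that a faster source could have delivered in seconds.)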


Of this solution, with reference to the alternatives, zz wrote:

zz, on Oct 22 2005, 10:03 AM, said:

I don't remember the particulars about Netfinity's patch; but we didn't want to kick out connections, and didn't want to request blocks smaller than the standard 180KBytes. Maybe that patch used one of those methods? It's too long ago to remember. :)

The small change I did simply decides if 1, 2 or 3 blocks will be requested.

/zz B)


So, as I was already trying to tweak the Dazzle faster-endgame code, I thought I'd also have a look at this piece of code. My solution does not kick out connections (though using it with a version of Dazzle's code is my final aim), and does not request blocks smaller than 180 KB.
Unlike the official code, my code works at the end of every part where there are multiple sources, and it works on relative speed, because the problem is always there; the very-slow-source scenario is just the more noticeable manifestation. :)

In DownloadClient.cpp CUpDownClient::SendBlockRequests()

void CUpDownClient::SendBlockRequests() said:

    // prevent locking of too many blocks
    int blockCount = 3;
    if(IsEmuleClient() && m_byCompatibleClient==0 && reqfile->ReduceAllocatedBlocks(this, this->m_lastPartAsked)) {
        // near end of part with multiple sources, share the blocks out; slower sources have blockCount reduced first
        blockCount = 1;
    }

In Partfile.h

Partfile.h said:

public:
bool    ReduceAllocatedBlocks(CUpDownClient* calling_source,uint16  m_PartAsked);

In Partfile.cpp

bool CPartFile::ReduceAllocatedBlocks(CUpDownClient* calling_source, uint16 m_PartAsked) *added*, said:

/* TK4 Mod: Returns true when the calling source would not complete its 3 blocks before the 'total transfer rate
for the part' would complete the part minus those three blocks. This helps prevent a slow source locking blocks at the end of a part,
causing sources which may not have any other parts we need to stop transferring. It also helps maintain a higher cumulative download
speed at the end of a file and sometimes (where a source has no other parts we need) at the end of a part. */

bool CPartFile::ReduceAllocatedBlocks(CUpDownClient* calling_source,uint16  m_PartAsked)
{
  // no 'part last asked for'
  if(m_PartAsked == 0xffff) return false;
  // Quick check: if there is only one downloading source overall
  if(m_downloadingSourceList.GetCount()<2)
    { // if this source is the only source, don't let it lock blocks if it's too slow
      if(calling_source->GetDownloadDatarate()<784) return true;
      else return false;
    }
 
  uint16 sourcecount = 0;
  uint32 remainingdata = 0;
  uint32 otherstransferrate = 1;// a value of 1 to prevent 'divide by zero' (saves 1 + otherstransferrate later)
  const uint32 threeblocks = EMBLOCKSIZE * 3;

  /* Calculate total transferring sources for this part and their overall transfer speed. */
  for(POSITION pos = m_downloadingSourceList.GetHeadPosition(); pos != NULL;)
    {
      // get next downloading source
      CUpDownClient* cur_src = m_downloadingSourceList.GetNext(pos);
      // is this source sending data for the same part as the calling source, but is not the calling source?
      if(cur_src->m_lastPartAsked==m_PartAsked && cur_src!=calling_source)
        {
          sourcecount++;                                        // increment sources for this part
          otherstransferrate += cur_src->GetDownloadDatarate(); // add this client's transfer rate to the total
        }
    }
 
  // if the calling source is the only currently transferring source for this part, use a fixed reference
  if(sourcecount < 1)
    { // if this source is the only source, don't let it lock blocks if it's too slow
      if(calling_source->GetDownloadDatarate()<784) return true;
      else return false;
    }

  /* Calculate how much of the part is left in bytes (based on code from GetNextRequestedBlock() )*/
  // Offsets of 'this' chunk

  const uint32 uStart  = m_PartAsked * PARTSIZE;
  const uint32 uEnd    = (GetFileSize() - 1 < (uStart + PARTSIZE - 1)) ?
                                      GetFileSize() - 1 : (uStart + PARTSIZE - 1);
 
  ASSERT( uStart <= uEnd );
  if(uStart >= uEnd) return false;

  // gets bytes remaining
  for(POSITION pos = gaplist.GetHeadPosition(); pos != NULL; )
    {
      const Gap_Struct* cur_gap = gaplist.GetNext(pos);
      // check whether the gap overlaps this part
      if(cur_gap->start < uStart)
        {
          if(cur_gap->end > uStart && cur_gap->end < uEnd) remainingdata += cur_gap->end - uStart + 1;
          else if(cur_gap->end >= uEnd) return false;
        }
      else if(cur_gap->start <= uEnd)
        {
          if(cur_gap->end < uEnd) remainingdata += cur_gap->end - cur_gap->start + 1;
          else                    remainingdata += uEnd - cur_gap->start + 1;
        }
    }

 
  if(threeblocks > remainingdata) return true; // keep up the combined download speed for as long as possible
  else remainingdata -= threeblocks;           // remainingdata now equals the data left after this client is allocated 3 blocks

  // if (3 * EMBLOCKSIZE / 'calling source speed') >= ('remaining incomplete part size - 3 blocks' / 'total other sources transfer rate') return true
  if((uint32)(threeblocks/(1 + calling_source->GetDownloadDatarate())) >= (uint32)(remainingdata/otherstransferrate)) return true;

  return false;
}
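For readers who want to play with the rule outside eMule, here is a stripped-down, self-contained sketch of the same comparison with the list walking removed. The function name and parameters are mine; only the arithmetic mirrors ReduceAllocatedBlocks() above.

#include <cstdint>

// Sketch: returns true when the calling source would finish its three blocks
// later than the other sources would need to finish the rest of the part.
bool WouldLockBlocks(uint32_t callerRate,      // bytes/s of the calling source
                     uint32_t othersRate,      // summed bytes/s of the other sources on this part
                     uint32_t remainingBytes)  // incomplete bytes left in the part
{
    const uint32_t threeBlocks = 3u * 184320u;     // 3 * EMBLOCKSIZE
    if(threeBlocks > remainingBytes)
        return true;                               // keep every source busy to the very end
    remainingBytes -= threeBlocks;                 // what would be left for the other sources
    // time for the caller to clear 3 blocks  vs  time for the others to clear the rest
    return threeBlocks / (1 + callerRate) >= remainingBytes / (1 + othersRate);
}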


I know it has been pointed out by people who know a great deal more than I do that Netfinity's solution is the best approach. :flowers:

However, I hope this code may present a simpler idea whilst still bringing about an improvement in performance. Thanks for reading! :)

edit: 19-11-2005 Bug fix in ReduceAllocatedBlocks() + added fixed speed comparisons for single sources.
edit: 25-11-2005 changed last if condition to >=

This post has been edited by BlueSonicBoy: 25 November 2005 - 05:31 PM


#2 zz

  • Group: Debugger
  • Posts: 2014
  • Joined: 30-November 02

Posted 17 November 2005 - 03:50 PM

IIRC, the standard eMule's chunk choice algorithm tries to avoid the case where you request blocks from the same chunk for several sources, so this code would hardly ever trigger except for the last chunk.

(In ZZUL I instead use a chunk choice strategy that tries to finish chunks as soon as possible, putting all sources on the same chunk, and lots of other mods have similar strategies. Could be useful there.

My local code currently always requests fewer and smaller blocks for really slow sources, even if it's not the last chunk. You never know when a faster source may let you download, so might just as well lock blocks as late as possible. I'll release that as part of ZZUL the next time I do a release, if you want to check yet another implementation that tries to speed up the end game.)

/zz B)

This post has been edited by zz: 17 November 2005 - 03:52 PM


#3 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 18 November 2005 - 12:07 AM

zz, on Nov 17 2005, 10:50 AM, said:

IIRC, the standard eMule's chunk choice algorithm tries to avoid the case where you request blocks from the same chunk for several sources,


Yes, it does:

bool CPartFile::GetNextRequestedBlock(CUpDownClient* sender, Requested_Block_Struct** newblocks, uint16* count) said:

      // Criterion 3. Request state (downloading in process from other source(s))
    const bool critRequested = cur_chunk.frequency > veryRareBound &&  // => CPU load
          IsAlreadyRequested(uStart, uEnd);


zz, on Nov 17 2005, 10:50 AM, said:

so this code would hardly ever trigger except for the last chunk.


I think there will be times when you are completing a chunk from sources which only have that chunk, or where that chunk is the only one of theirs you need, even when you are more than 4 parts from completion of a large file. Of course, the nearer to completing the file you are, the more chance there is of this being the case. But is there any harm in having code that works for every chunk/part? :flowers:

I feel hard-coded speed limits only address the worst-case scenarios. By reducing blockCount to 1 when a source would not complete its 3 blocks before all the other sources for that part would complete the part, you guarantee retaining more transferring sources for longer; and because this code will affect slow (or comparatively slow) sources earlier, you make sure they can't lock too many blocks from the faster sources.
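A quick illustration of the relative-speed point, with numbers invented for the example: suppose a part has 2 MB left, the calling source runs at 5 KB/s, and the other sources on that part total 200 KB/s. The caller would need about 540 KB / 5 KB/s ≈ 108 s for its three blocks, while the others could clear the remaining 2 MB - 540 KB ≈ 1.5 MB in roughly 7.5 s, so blockCount drops to 1 even though 5 KB/s is well above any hard-coded 'slow source' limit.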


zz, on Nov 17 2005, 10:50 AM, said:

My local code currently always requests fewer and smaller blocks for really slow sources, even if it's not the last chunk. You never know when a faster source may let you download, so might just as well lock blocks as late as possible. I'll release that as part of ZZUL the next time I do a release, if you want to check yet another implementation that tries to speed up the end game.)


That sounds interesting! :thumbup:

You are splitting blocks between sources?

:respect:

#4 zz

  • Group: Debugger
  • Posts: 2014
  • Joined: 30-November 02

Posted 18 November 2005 - 04:41 PM

In the protocol, there's no such thing as blocks, it seems. The request just specifies start byte position and end byte position. I just make requests where these two positions are closer together than 180 KBytes. Nothing hard about that :), and I'm guessing Netfinity is doing the same thing (and did it first).
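A minimal stand-alone sketch of that idea, assuming nothing about eMule internals (names are mine): take whatever byte range you were about to ask for and cut it into pieces of at most 180 KB before sending the requests.

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Split an inclusive byte range [start, end] into pieces of at most maxLen bytes.
std::vector<std::pair<uint64_t, uint64_t>>
SplitRange(uint64_t start, uint64_t end, uint64_t maxLen = 180 * 1024)
{
    std::vector<std::pair<uint64_t, uint64_t>> pieces;
    while (start <= end) {
        const uint64_t pieceEnd = std::min(end, start + maxLen - 1);
        pieces.emplace_back(start, pieceEnd);   // one (start, end) request per piece
        start = pieceEnd + 1;
    }
    return pieces;
}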

/zz B)

#5 netfinity

  • Master of WARP
  • Group: Members
  • Posts: 1658
  • Joined: 23-April 04

Posted 18 November 2005 - 08:21 PM

zz said:

In the protocol, there's no such thing as blocks, it seems.

Yes, my thought too! It seems to be something invented by the eMule devs, as eDonkey at that time probably asked for 180 kB at a time. Nowadays eDonkey uses 475 kB (sub-)part sizes and also requests data in sets of 475 kB. Offtopic: my upcoming NetF beta will translate those 475 kB eDonkey (sub-)parts into eMule 180 kB AICH blocks, to enable download of incomplete parts.

About my algorithm
Since many posters claim my algorithm is the best, I will describe it in short. This is what makes my mod download at almost full rate until the last few seconds before completion.

Phase 1: Block request sizes

1. Estimate the time to completion.
time_to_complete = bytes_left_to_download / datarate_all_sources

2. Estimate the number of bytes to request so that the source doesn't complete later than 10 seconds after the estimated file completion.
bytes_to_request = source_datarate * (time_to_complete + 10)

3. Split the request in three to fill the OP_PARTREQ, but round the numbers to the nearest 10 kB.

Phase 2: Put the source on hold if it can't request enough bytes without breaking the completion estimate
This means that if the calculation in phase 1 returned less than 10 kB, the source is put on hold for 20 seconds, and then dropped if no new blocks become available. Note this is only done when the source has sent all the pieces we requested. This is done in case the other sources fail or the part is corrupt; then we can resume downloading from this source.

Phase 3: Dropping slow sources
This phase is only entered if a fast source can't allocate any more blocks. Here we drop in the middle of a transaction, so data will be lost.

1. Find the slowest source that is slower than 1 kB/s and would break the completion estimate.
The calculation is basically the same as in phase 1.

2. Cancel the download and free all allocated blocks.
This causes data loss (up to ~20 kB), as there are likely to be incomplete packet transfers. You have to take much care when doing this!


NOTE! My actual algorithm is much more complex, but these are the basics. Also, I always keep my requests within the 180 kB AICH block bounds in the hope of handling corruption better.
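A rough, self-contained sketch of phase 1 as described above; all names and the rounding helper are assumptions of mine, not NetF code:

#include <cstdint>

// Phase 1 sketch: size the next request so this source finishes no later than
// ~10 s after the estimated completion of the whole file, then split it in three.
struct BlockPlan { uint64_t perBlock[3]; };

BlockPlan PlanRequest(uint64_t bytesLeft,       // bytes still missing from the file
                      uint64_t rateAllSources,  // combined datarate of all sources, bytes/s
                      uint64_t sourceRate)      // this source's datarate, bytes/s
{
    const double timeToComplete = rateAllSources ? double(bytesLeft) / rateAllSources : 0.0;
    uint64_t bytesToRequest = uint64_t(sourceRate * (timeToComplete + 10.0));
    // round to the nearest 10 kB, as described in step 3
    bytesToRequest = ((bytesToRequest + 5 * 1024) / (10 * 1024)) * (10 * 1024);
    BlockPlan plan;
    for (int i = 0; i < 3; ++i)
        plan.perBlock[i] = bytesToRequest / 3;  // fill the three slots of the part request
    return plan;
}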

/netfinity

[EDIT]Typed the wrong size for eDonkey crumbs[/EDIT]

This post has been edited by netfinity: 18 November 2005 - 09:25 PM


#6 SlugFiller

  • The one and only master slug
  • Group: Members
  • Posts: 6988
  • Joined: 15-September 02

Posted 18 November 2005 - 10:12 PM

I remember there was a bug that caused eMule not to upload to eDonkey because it doesn't accept requests larger than 180k, probably because such requests could be used for cheating Tarod's full chunk transfer (which is no longer used), and they also bear some impact on timed cycling.

At the time I created a feature for splitting requested blocks if requests aren't chunk-aligned and in 180k blocks.
This is problematic for eMule, since the inflate call does not reset the stream on a final block read, and does not attempt to read a second stream, and since streams are per-block the split block would not be read correctly.
However, since eMule already chunk-aligns requests, and does not exceed 180k per block request, this is a non-issue. It only affects eDonkey, which doesn't use compressed packets anyway.

Note that it wouldn't be that difficult to add multi-stream reading to eMule's compressed packet handling, if you want to support unaligned packets.
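For anyone curious what multi-stream reading would look like in zlib terms, a minimal generic sketch (plain zlib, nothing eMule-specific): whenever inflate() reports Z_STREAM_END with input still left, reset the stream state and keep decoding.

#include <zlib.h>
#include <cstring>
#include <vector>

// Inflate a buffer that may contain several zlib streams back to back.
// Returns false on any zlib error.
bool InflateMultiStream(const unsigned char* in, size_t inLen, std::vector<unsigned char>& out)
{
    z_stream strm;
    std::memset(&strm, 0, sizeof(strm));
    if (inflateInit(&strm) != Z_OK)
        return false;

    strm.next_in  = const_cast<unsigned char*>(in);
    strm.avail_in = static_cast<uInt>(inLen);

    unsigned char buf[16 * 1024];
    while (strm.avail_in > 0) {
        strm.next_out  = buf;
        strm.avail_out = sizeof(buf);
        const int ret = inflate(&strm, Z_NO_FLUSH);
        out.insert(out.end(), buf, buf + (sizeof(buf) - strm.avail_out));
        if (ret == Z_STREAM_END) {
            // a second stream may follow in the remaining input
            if (strm.avail_in > 0 && inflateReset(&strm) != Z_OK) {
                inflateEnd(&strm);
                return false;
            }
        } else if (ret != Z_OK) {
            inflateEnd(&strm);
            return false;
        }
    }
    inflateEnd(&strm);
    return true;
}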

#7 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 21 November 2005 - 03:46 AM

zz, on Nov 18 2005, 11:41 AM, said:

In the protocol, there's no such thing as blocks, it seems.


Thank you. That is a useful thing to know. :)

Given that, I was wondering... Instead of asking for 1 block when a source is slow, which excludes non-eMule clients, would there be any reason why we can't get one block from GetNextRequestedBlock(), which I think does a great job of selecting blocks, then split that one block into three and make a three-block request of 180K or less?

I wrote this code to test the idea and I am testing it now; it seems to work... But I am sure you guys (and girls) will be able to spot problems I would never see (until too late)... :flowers:

So...

in DownloadClient.cpp

DownloadClient.cpp void CUpDownClient::CreateBlockRequests(int iMaxBlocks) said:

void CUpDownClient::CreateBlockRequests(int iMaxBlocks)
{
  ASSERT( iMaxBlocks >= 1 /*&& iMaxBlocks <= 3*/ );
  if(m_DownloadBlocks_list.IsEmpty())
    {
      uint16 count;
      //start block splitting code
      if(iMaxBlocks==1 && !m_PendingBlocks_list.GetCount())
        {//Block count reduction has activated convert the 1 block request to 3 blocks
        count = iMaxBlocks;
        Requested_Block_Struct** toadd = new Requested_Block_Struct*[1];
        if(reqfile->GetNextRequestedBlock(this,toadd,&count))
          {//splits 1 block into 3. generally 1 180Kb block into 3 60Kb blocks
            Requested_Block_Struct* block_one = new Requested_Block_Struct;
            Requested_Block_Struct* block_two = new Requested_Block_Struct;
            Requested_Block_Struct* block_thr = new Requested_Block_Struct;
            uint32 thirdblocksize = (toadd[0]->EndOffset - toadd[0]->StartOffset)/3;
            /*First Block*/
            block_one->StartOffset = toadd[0]->StartOffset;
            block_one->EndOffset = toadd[0]->StartOffset + thirdblocksize;
            md4cpy(block_one->FileID, toadd[0]->FileID);
            block_one->transferred = 0;
            /*Second Block*/
            block_two->StartOffset = block_one->EndOffset + 1;
            block_two->EndOffset = block_two->StartOffset + thirdblocksize;
            md4cpy(block_two->FileID, toadd[0]->FileID);
            block_two->transferred = 0;
            /*Third Block*/
            block_thr->StartOffset = block_two->EndOffset + 1;
            block_thr->EndOffset = toadd[0]->EndOffset;
            md4cpy( block_thr->FileID, toadd[0]->FileID);
            block_thr->transferred = 0;
            //add the 3 blocks
            reqfile->SwitchBlocks(block_one, block_two, block_thr, toadd);
            delete[] toadd;
            m_DownloadBlocks_list.AddTail( block_one );
            m_DownloadBlocks_list.AddTail( block_two );
            m_DownloadBlocks_list.AddTail( block_thr );
            while(m_PendingBlocks_list.GetCount() < 3 && !m_DownloadBlocks_list.IsEmpty())
                {
                  Pending_Block_Struct* pblock = new Pending_Block_Struct;
                  pblock->block = m_DownloadBlocks_list.RemoveHead();
                  m_PendingBlocks_list.AddTail(pblock);
                  }
            return;
            }
          else
            {
              delete[] toadd;
              return;
            }
        } //End block splitting code

      if(iMaxBlocks > m_PendingBlocks_list.GetCount()) count = iMaxBlocks - m_PendingBlocks_list.GetCount();
      else count = 0;

      Requested_Block_Struct** toadd = new Requested_Block_Struct*[count];

      if(reqfile->GetNextRequestedBlock(this,toadd,&count))
        {
          for (int i = 0; i < count; i++)
            m_DownloadBlocks_list.AddTail(toadd[i]);
        }

      delete[] toadd;
    }

  // as in the stock function: move queued blocks onto the pending list
  while(m_PendingBlocks_list.GetCount() < iMaxBlocks && !m_DownloadBlocks_list.IsEmpty())
    {
      Pending_Block_Struct* pblock = new Pending_Block_Struct;
      pblock->block = m_DownloadBlocks_list.RemoveHead();
      m_PendingBlocks_list.AddTail(pblock);
    }
}
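As a sanity check on the split arithmetic (offsets are inclusive, as with StartOffset/EndOffset above), a tiny stand-alone version of the same three-way cut; the asserts show the pieces tile the original range with no gap and no overlap:

#include <cassert>
#include <cstdint>

struct Range { uint64_t start, end; };   // inclusive, like StartOffset/EndOffset

void SplitInThree(const Range& whole, Range out[3])
{
    const uint64_t third = (whole.end - whole.start) / 3;
    out[0] = { whole.start,    whole.start + third };
    out[1] = { out[0].end + 1, out[0].end + 1 + third };
    out[2] = { out[1].end + 1, whole.end };
}

int main()
{
    const Range whole = { 0, 184319 };   // one 180 KB block
    Range parts[3];
    SplitInThree(whole, parts);
    assert(parts[0].start == whole.start && parts[2].end == whole.end);
    assert(parts[1].start == parts[0].end + 1);   // no gap, no overlap
    assert(parts[2].start == parts[1].end + 1);
    return 0;
}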


in DownloadClient.cpp, my version

DownloadClient.cpp void CUpDownClient::SendBlockRequests() said:

    int blockCount = 3;
    if(reqfile->ReduceAllocatedBlocks(this, this->m_lastPartAsked)) {
        // near end of part with multiple sources, share the blocks out; slower sources have blockCount reduced first
        blockCount = 1;
    }

OR

in DownloadClient.cpp, official version

DownloadClient.cpp void CUpDownClient::SendBlockRequests() said:

    // prevent locking of too many blocks when we are on a slow (probably standby/trickle) slot
    int blockCount = 3;
    if(reqfile->GetFileSize()-reqfile->GetCompletedSize() <= PARTSIZE*4) {
        if(GetDownloadDatarate() < 600) {
            blockCount = 1;
        } else if(IsEmuleClient() && m_byCompatibleClient==0 && GetDownloadDatarate() < 1200) {
            blockCount = 2;
        }
    }



in PartFile.h

Quote

 
public:
void    SwitchBlocks(Requested_Block_Struct* block_one,Requested_Block_Struct* block_two,Requested_Block_Struct* block_thr,Requested_Block_Struct** toadd );


in PartFile.cpp Fixed 22-11-05

PartFile.cpp void CPartFile::SwitchBlocks(Requested_Block_Struct* block_one, Requested_Block_Struct* block_two, Requested_Block_Struct* block_thr, Requested_Block_Struct** toadd), said:

void CPartFile::SwitchBlocks(Requested_Block_Struct* block_one,Requested_Block_Struct* block_two,Requested_Block_Struct* block_thr,Requested_Block_Struct** toadd )
{
    POSITION pos;
    POSITION posLast;
    //find the 1 full size block added
    posLast = pos = requestedblocks_list.GetTailPosition();
    Requested_Block_Struct* block = requestedblocks_list.GetNext(pos);
    if(toadd[0]->StartOffset == block->StartOffset && toadd[0]->EndOffset == block->EndOffset)
      {
        delete block; //delete full size block
        requestedblocks_list.RemoveAt(posLast); //remove it from the list
      }
    else
      { //RemoveBlockFromList() + delete block;
        for(pos = requestedblocks_list.GetHeadPosition(); pos != NULL; )
          {
            posLast = pos;
            Requested_Block_Struct* block = requestedblocks_list.GetNext(pos);
            if(block->StartOffset <= toadd[0]->StartOffset && block->EndOffset >= toadd[0]->EndOffset)
              {
                delete block; //delete full size block
                requestedblocks_list.RemoveAt(posLast);
              }
          }
      }

        //add the 3 1/3 size blocks
        requestedblocks_list.AddTail(block_one);
        requestedblocks_list.AddTail(block_two);
        requestedblocks_list.AddTail(block_thr);
}


Thanks for looking! :)

This post has been edited by BlueSonicBoy: 23 November 2005 - 02:43 AM


#8 tHeWiZaRdOfDoS

  • Man, what a bunch of jokers...
  • Group: Members
  • Posts: 5630
  • Joined: 28-December 02

Posted 22 November 2005 - 01:43 PM

Finally I found some time to read this thread...

Remarks:
- you should use
reqfile->RemoveBlockFromList(toadd[0]->StartOffset, toadd[0]->EndOffset);
because you can't be sure that the last block on the list is the one you just created (because of multithreading)

- it'd be better to change the function/code - I'd suggest replacing this in SendBlockRequests():

Quote

// prevent locking of too many blocks when we are on a slow (probably standby/trickle) slot
int blockCount = 3;
if(IsEmuleClient() && m_byCompatibleClient==0
  && reqfile->GetFileSize()-reqfile->GetCompletedSize() <= PARTSIZE*4)
{
  // if there's less than two chunks left, request fewer blocks for
  // slow downloads, so they don't lock blocks from faster clients.
  // Only trust eMule clients to be able to handle less blocks than three
  if(GetDownloadDatarate() < /*600*/1024)
   blockCount = 1;
  else if(GetDownloadDatarate() < 1200)
   blockCount = 2;
}
CreateBlockRequests(blockCount);

with a plain

CreateBlockRequests();

Quote

void CUpDownClient::CreateBlockRequests(int iMaxBlocks)
{
  if(iMaxBlocks == -1) //just keep the possibility (e.g. for webcache)
  {
    iMaxBlocks = 3;
    // prevent locking of too many blocks when we are on a slow (probably standby/trickle) slot
    if(IsEmuleClient() && m_byCompatibleClient==0
      && reqfile->GetFileSize()-reqfile->GetCompletedSize() <= PARTSIZE*4)
    {
      // if there's less than two chunks left, request fewer blocks for
      // slow downloads, so they don't lock blocks from faster clients.
      // Only trust eMule clients to be able to handle less blocks than three
      if(GetDownloadDatarate() < /*600*/1024)
        iMaxBlocks = 1;
    }
  }

  ASSERT( iMaxBlocks >= 1 /*&& iMaxBlocks <= 3*/ );

And of course adapt the header, too...

Quote

void  CreateBlockRequests(int iMaxBlocks = -1);

This post has been edited by tHeWiZaRdOfDoS: 22 November 2005 - 01:47 PM


#9 SlugFiller

  • The one and only master slug
  • Group: Members
  • Posts: 6988
  • Joined: 15-September 02

Posted 22 November 2005 - 06:04 PM

Quote

because of multithreading

Where is the block-list multi-threaded?
Everything is done through the main thread when it comes to download. Even in upload, the blocks are handled by the main thread (pre-emptively in ZZUL, through a message callback in SFBT).

#10 tHeWiZaRdOfDoS

  • Man, what a bunch of jokers...
  • Group: Members
  • Posts: 5630
  • Joined: 28-December 02

Posted 22 November 2005 - 07:09 PM

CreateBlockRequests is not called from a periodic timer or thread, but when you SendBlockRequests, which *might* happen several times simultaneously/shortly after one another - causing the block at the tail to be different from the block you want...


BTW... I think GetNextRequestedBlock or CreateBlockRequests should be limited to one call at a time, because this might be the reason for many of the "already requested" situations, don't you think?

#11 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 23 November 2005 - 02:49 AM

tHeWiZaRdOfDoS, on Nov 22 2005, 02:09 PM, said:

CreateBlockRequests is not called from a periodic timer or thread but when you SendBlockRequests which *might* happen several times simultaneously/short after another - causing the block at the tail to be different to the block you want...


Thanks for looking at my code! :)

I have just edited my post and fixed that potential problem. Thanks for pointing it out; I should have checked that block before removing it regardless, really!

#12 SlugFiller

  • The one and only master slug
  • Group: Members
  • Posts: 6988
  • Joined: 15-September 02

Posted 23 November 2005 - 05:36 PM

Quote

CreateBlockRequests is not called from a periodic timer or thread but when you SendBlockRequests which *might* happen several times simultaneously/short after another

SendBlockRequest is only called either from ProcessBlockPacket or from StartDownload which traces back to ProcessExtPacket and ProcessAcceptUpload.
Either way, it traces back to the packet handling, which is single-threaded for all users, as it runs in the GUI thread. Even if it had a separate thread for each socket (yeah, we'd wish), each call to SendBlockRequest would block the socket's handling until it returns and prevent a second call.

Threading issues do not apply here.
Besides, if they did, the whole concept of accessing a list without a lock would have been a much greater concern.

#13 tHeWiZaRdOfDoS

  • Man, what a bunch of jokers...
  • Group: Members
  • Posts: 5630
  • Joined: 28-December 02

Posted 24 November 2005 - 12:51 PM

Touché! :lol:

Still, it doesn't hurt to be sure :flowers:

BTW @BSB - that's my current version:

Quote

void CUpDownClient::CreateBlockRequests(int iMaxBlocks)
{
ASSERT( iMaxBlocks >= 1 /*&& iMaxBlocks <= 3*/ );

if (m_DownloadBlocks_list.IsEmpty())
{
  uint16 count = 0;
//>>> Split Reqs
  if(iMaxBlocks == -1)
  {
   iMaxBlocks = 3;
   // prevent locking of too many blocks when we are on a slow (probably standby/trickle) slot 
   if(/*IsEmuleClient() && m_byCompatibleClient==0
    &&*/ reqfile->GetFileSize()-reqfile->GetCompletedSize() <= PARTSIZE*4)
   {
    // if there's less than two chunks left, request fewer blocks for
    // slow downloads, so they don't lock blocks from faster clients.
    // Only trust eMule clients to be able to handle less blocks than three
    if(GetDownloadDatarate() < /*600*/1024)
     iMaxBlocks = 1;
    /*else if(GetDownloadDatarate() < 1200)
     iMaxBlocks = 2;*/
   }
   if(iMaxBlocks == 1 && !m_PendingBlocks_list.GetCount())     
   {
    //Block count reduction has activated convert the 1 block request to 3 blocks
    count = iMaxBlocks;
    Requested_Block_Struct** toadd = new Requested_Block_Struct*[1];
    if(reqfile->GetNextRequestedBlock(this, toadd, &count))
    {
     //splits 1 block into 3. generally 1 180Kb block into 3 60Kb blocks          
     Requested_Block_Struct* block_one = new Requested_Block_Struct;
     Requested_Block_Struct* block_two = new Requested_Block_Struct; 
     Requested_Block_Struct* block_thr = new Requested_Block_Struct;
     const uint32 thirdblocksize = (toadd[0]->EndOffset - toadd[0]->StartOffset)/3;

     //First Block
     block_one->StartOffset = toadd[0]->StartOffset;
     block_one->EndOffset = toadd[0]->StartOffset + thirdblocksize;
     md4cpy(block_one->FileID, toadd[0]->FileID);    
     block_one->transferred = 0;    
     reqfile->AddBlockToList(block_one);
     m_DownloadBlocks_list.AddTail(block_one);
 
     //Second Block          
     block_two->StartOffset = block_one->EndOffset + 1;
     block_two->EndOffset = block_two->StartOffset + thirdblocksize;
     md4cpy(block_two->FileID, toadd[0]->FileID);           
     block_two->transferred = 0;   
     reqfile->AddBlockToList(block_two);
     m_DownloadBlocks_list.AddTail(block_two);

     //Third Block
     block_thr->StartOffset = block_two->EndOffset + 1;
     block_thr->EndOffset = toadd[0]->EndOffset;        
     md4cpy( block_thr->FileID, toadd[0]->FileID);   
     block_thr->transferred = 0;               
     reqfile->AddBlockToList(block_thr);
     m_DownloadBlocks_list.AddTail(block_thr);

     //CLEAR the just created block
     reqfile->RemoveBlockFromList(toadd[0]->StartOffset, toadd[0]->EndOffset); //EDIT
     delete toadd[0]; // single object, so plain delete (the array delete below frees only the pointer array)
     delete[] toadd;

     while (m_PendingBlocks_list.GetCount() < 3 && !m_DownloadBlocks_list.IsEmpty())
     {
      Pending_Block_Struct* pblock = new Pending_Block_Struct;
      pblock->block = m_DownloadBlocks_list.RemoveHead();
      m_PendingBlocks_list.AddTail(pblock);
     }

     return; //done
    }
    else              
    {                  
     delete[] toadd;       
     return;                      
    }  
   }
  } //else flow over to the default system...
//<<< Split Reqs

  if(iMaxBlocks > m_PendingBlocks_list.GetCount())
   count = iMaxBlocks - m_PendingBlocks_list.GetCount();
 
  if(count)
  {
   Requested_Block_Struct** toadd = new Requested_Block_Struct*[count];
   if (reqfile->GetNextRequestedBlock(this, toadd, &count))
   {
    for (uint8 i = 0; i < count; i++)
     m_DownloadBlocks_list.AddTail(toadd[i]);
   
   }
   delete[] toadd;              
  }
}

while (m_PendingBlocks_list.GetCount() < iMaxBlocks && !m_DownloadBlocks_list.IsEmpty())
{
  Pending_Block_Struct* pblock = new Pending_Block_Struct;
  pblock->block = m_DownloadBlocks_list.RemoveHead();
  m_PendingBlocks_list.AddTail(pblock);
}
}


Seems to work like a charm... :devil: :punk:

This post has been edited by tHeWiZaRdOfDoS: 24 November 2005 - 09:28 PM


#14 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 24 November 2005 - 05:03 PM

tHeWiZaRdOfDoS, on Nov 24 2005, 07:51 AM, said:

BTW @BSB - that's my current version:



in bool CPartFile::GetNextRequestedBlock() this block is created:

bool CPartFile::GetNextRequestedBlock(CUpDownClient* sender, Requested_Block_Struct** newblocks, uint16* count) said:

Requested_Block_Struct* pBlock = new Requested_Block_Struct;

Shouldn't we delete it after removing the block from the list?
Not deleting it wouldn't stop the code from working, because it's no longer referenced by anything, but it would just be leaked? :flowers:

Quote

    if(toadd[0]->StartOffset == block->StartOffset && toadd[0]->EndOffset == block->EndOffset)
      {
        delete block; //delete full size block
        requestedblocks_list.RemoveAt(posLast); //remove it from the list
      }

This post has been edited by BlueSonicBoy: 24 November 2005 - 05:06 PM


#15 tHeWiZaRdOfDoS

  • Man, what a bunch of jokers...
  • Group: Members
  • Posts: 5630
  • Joined: 28-December 02

Posted 24 November 2005 - 05:23 PM

I thought I did that....

tHeWiZaRdOfDoS, on Nov 24 2005, 01:51 PM, said:

Quote

void CUpDownClient::CreateBlockRequests(int iMaxBlocks)
{
...
...
    Requested_Block_Struct** toadd = new Requested_Block_Struct*[1];
    if(reqfile->GetNextRequestedBlock(this, toadd, &count))
    {
...
...
     //CLEAR the just created block
     reqfile->RemoveBlockFromList(toadd[0]->StartOffset, toadd[0]->EndOffset); //EDIT
     delete[] toadd;
...
...


#16 BlueSonicBoy

  • Magnificent Member
  • Group: Members
  • Posts: 396
  • Joined: 26-September 04

Posted 24 November 2005 - 06:04 PM

tHeWiZaRdOfDoS, on Nov 24 2005, 12:23 PM, said:

I thought I did that....

tHeWiZaRdOfDoS, on Nov 24 2005, 01:51 PM, said:

Quote

void CUpDownClient::CreateBlockRequests(int iMaxBlocks)
{
...
...
    Requested_Block_Struct** toadd = new Requested_Block_Struct*[1];
    if(reqfile->GetNextRequestedBlock(this, toadd, &count))
    {
...
...
    //CLEAR the just created block
    reqfile->RemoveBlockFromList(toadd[0]->StartOffset, toadd[0]->EndOffset); //EDIT
    delete[] toadd;
...
...



I'm unsure? Doesn't delete[] toadd; just delete the pointer array from Requested_Block_Struct** toadd = new Requested_Block_Struct*[1]; and not the object it points to?
reqfile->RemoveBlockFromList() only removes the block from the list? :confused: :flowers:

#17 SlugFiller

  • The one and only master slug
  • Group: Members
  • Posts: 6988
  • Joined: 15-September 02

Posted 24 November 2005 - 06:50 PM

Yes, it should be "delete toadd[0]" followed by "delete[] toadd". The extra * prevents a one-step deletion; deleting an array of pointers does not delete the objects to which they point.
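In other words, a generic C++ illustration with a simplified stand-in struct (in the mod the object actually comes back from GetNextRequestedBlock(); here it is allocated directly for brevity):

#include <cstdint>

struct Requested_Block_Struct { uint32_t StartOffset, EndOffset; };  // simplified stand-in

int main()
{
    Requested_Block_Struct** toadd = new Requested_Block_Struct*[1]; // array holding one pointer
    toadd[0] = new Requested_Block_Struct;                           // the object the pointer refers to

    delete toadd[0];    // frees the object itself
    delete[] toadd;     // frees only the array of pointers
    return 0;
}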
