Page 17 of 33
Results 321 to 340 of 644
  1. #321
    Community Member Flavilandile's Avatar
    Join Date
    Aug 2010
    Posts
    0

    Default

    Oh dear, must chime in... Resisting is not possible.

    Quote Originally Posted by mna View Post
    I think we may have found the problem, and funny how the chat was reported to be working even during lag spikes...

    In this case, seems that UDP packets are dropped regularly due to PMTUD not working AND there being a MTU bottleneck.
    There's lots of reasons why UDP packets can be dropped...

    Off the top of my head :

    At the ingress of a router... because the QoS policy applied to that ingress says to drop any burst of packets that comes in beyond what has been configured.
    At the ingress of a router... because there are just too many packets trying to ingress at the same time, and the QoS policy says to drop UDP in that case.
    At the ingress of a router... because there are just too many packets trying to ingress at the same time and there's no way to recover except by dropping packets.
    In the backplane of a router... because the backplane is overloaded by traffic from other cards.
    At the egress of a router... all the ingress reasons above.

    ( rinse/repeat at each router that appears in a traceroute.... and all the ones that do not appear in the traceroute because the packets are in the LDP backbone )

    Note that none of them involve MTU, because MTU is pointless when you have terabit interfaces: any size will go through, even 16 KB jumbo frames.
    And actually there are more problems with small packets ( 48 to 512 bytes ) than with 4 KB packets, because small packets come in faster and can overload
    the ingress/egress processor, while larger packets give it more time to do its job.

    MTU can only be an issue on LANs. But from what I've seen it's probably more likely some scripts in the game server that don't like the new 64-bit systems.
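    The ingress-drop cases listed above can be sketched as a toy token-bucket policer. This is purely illustrative; the tick model and parameter names are my own invention, not any vendor's QoS syntax:

    ```python
    def police(arrivals, bucket_size, refill_per_tick):
        """Token-bucket policer: returns (passed, dropped) packet counts.

        arrivals[i] = packets arriving at tick i; each packet costs one token.
        Bursts beyond the configured bucket depth are dropped, much like a
        QoS ingress policy on a router interface.
        """
        tokens = bucket_size
        passed = dropped = 0
        for burst in arrivals:
            tokens = min(bucket_size, tokens + refill_per_tick)
            ok = min(burst, tokens)       # only as many packets as tokens left
            tokens -= ok
            passed += ok
            dropped += burst - ok
        return passed, dropped

    # A steady stream fits; the same total traffic in one burst overflows the bucket.
    print(police([10] * 10, bucket_size=20, refill_per_tick=10))  # (100, 0)
    print(police([100], bucket_size=20, refill_per_tick=10))      # (20, 80)
    ```

    The point of the toy model: the drop decision depends only on arrival timing, never on packet size, which matches the argument that these drops have nothing to do with MTU.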

    Quote Originally Posted by mna View Post
    The "correct" way to go about this would probably be to run this thing over DCCP... maybe SCTP or some such might also work better than the current situation... but in the short term (as in less than 5 years ) the best bet would be to just fix the routers so that UDP works again, unreliable though it may be.
    Do you know what SCTP is and what it's used for? Really... I'm asking because I think you're basing your "correct way" on wrong assumptions about that protocol.

    You need to create a static link between the two ends ( it cannot be dynamic; only the switchover to another link can be dynamic, if you have a multihoming configuration ).
    Then you have a semi-reliable plesiochronous transport tunnel, set up over IP, that can be used to carry anything that can be a stream of bytes ( it's at the same level as TCP ).
    Its main use is in telecommunication networks, to carry SS7 control messages and to transfer the voice/data/whatever 64K channels of SS7 over IP instead of having to set up PCM links
    ( T1s in the US, E1s in Europe ). It allows a lot more traffic on a cable than PCM can carry ( T1 = 1.5 Mb/s; E1 = 2 Mb/s; 1 or 10 Gb Ethernet or optical links are common in aggregation, and 1 Tb is regularly seen in backbones ) [ Sigtran for the old generation of equipment and Diameter for the newer ones ].

    Edit : oh, and doing traceroutes to the gls server is mostly pointless ( except for the few cases where there's a network issue outside of Turbine ). What we need is for the servers to answer traceroutes ( they stopped doing that in 2011 ) and their current IPs ( they changed with the datacenter move ).
    Last edited by Flavilandile; 03-17-2016 at 01:54 PM.
    On G-Land : Flavilandile, Blacklock, Yaelle, Millishande, Larilandile, Gildalinde, Tenalafel, and many other...

  2. #322
    Community Member
    Join Date
    May 2013
    Posts
    1,074

    Default

    Quote Originally Posted by Flavilandile View Post
    There's lots of reasons why UDP packets can be dropped...

    Off the top of my head :

    At the ingress of a router... because the QoS policy applied to that ingress says to drop any burst of packets that comes in beyond what has been configured.
    At the ingress of a router... because there are just too many packets trying to ingress at the same time, and the QoS policy says to drop UDP in that case.
    At the ingress of a router... because there are just too many packets trying to ingress at the same time and there's no way to recover except by dropping packets.
    In the backplane of a router... because the backplane is overloaded by traffic from other cards.
    At the egress of a router... all the ingress reasons above.

    ( rinse/repeat at each router that appears in a traceroute.... and all the ones that do not appear in the traceroute because the packets are in the LDP backbone )

    Note that none of them involve MTU, because MTU is pointless when you have terabit interfaces: any size will go through, even 16 KB jumbo frames.
    And actually there are more problems with small packets ( 48 to 512 bytes ) than with 4 KB packets, because small packets come in faster and can overload
    the ingress/egress processor, while larger packets give it more time to do its job.

    MTU can only be an issue on LANs. But from what I've seen it's probably more likely some scripts in the game server that don't like the new 64-bit systems.
    MTU mismatch very much can be an issue in any kind of heterogeneous network, and people are still occasionally getting it wrong on the Internet backbone after all these decades.

    You are entirely correct that larger frames are more efficient; that is exactly why people use them.

    Now, in addition to all the perfectly valid reasons you listed, remember that we had router hops that are "known" to exist but don't respond to ping because ICMP is blocked completely... thus an ICMP needs-fragmentation reply from them is also blocked, and the datagram therefore either needs to be fragmented or is dropped without notification. As it happens, some of these have also been configured not to fragment anything, for efficiency reasons.
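    That failure mode can be condensed into a few lines. This is a simulation of the black hole only, not real networking code; the path MTUs, hop list, and function name are all made up for illustration:

    ```python
    def send_with_pmtud(size, path_mtus, icmp_blocked):
        """Simulate one DF-flagged datagram crossing a path of router MTUs.

        Returns 'delivered', 'frag-needed' (the ICMP error reaches the
        sender, so PMTUD works), or 'black-holed' when the hop that should
        report the error has ICMP filtered and just drops the packet.
        """
        for hop, mtu in enumerate(path_mtus):
            if size > mtu:
                if icmp_blocked[hop]:
                    return "black-holed"   # silent drop: PMTUD is broken
                return "frag-needed"       # sender learns the bottleneck MTU
        return "delivered"

    path = [1500, 1500, 1400, 1500]        # hypothetical bottleneck at hop 3
    print(send_with_pmtud(1200, path, [False] * 4))                   # delivered
    print(send_with_pmtud(1480, path, [False] * 4))                   # frag-needed
    print(send_with_pmtud(1480, path, [False, False, True, False]))   # black-holed
    ```

    Small datagrams always get through, large ones vanish without any error, which is exactly why this bug shows up as "chat works but the game lags".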

    There's also the alternative that the routers don't appear in traceroute because they're doing something funny with the packet TTL.

    That one problem at work was actually a combination of the latter two. I still don't know what the backbone designers were thinking, but this is what we got. (That job was years ago and all. Largish corporate network.)



    Quote Originally Posted by Flavilandile View Post
    Do you know what SCTP is and what it's used for ? Really.... I'm asking that question because I think you're basing your correct way on wrong assumptions regarding that protocol.
    Um, what? Do you mean that SCTP would be more correct than DCCP, then? Because I meant that DCCP would be the "correct way", and that even SCTP could be better than what we have now...

    Quote Originally Posted by Flavilandile View Post
    You need to create a static link between the two ends (It cannot be dynamic, only the switchover to another link can be dynamic if you have a multihoming configuration ).
    Then you have a semi reliable plesiochronous transport tunnel that can be used to carry anything that can be a stream of bytes set up over IP ( it's at the same level as TCP ).
    The main use for it is in Telecomunication networks to carry SS7 control messages and to transfer over IP the voice/data/whatever 64K channels of SS7 instead of having to set up PCM links
    ( T1s in the US, E1s in Europe ). It allows to have a lot more traffic on a cable than PCM can do ( T1 = 1,44Mb; E1 = 2Mb; Ethernet, Optical 1 or 10Gb are common in aggregation, 1Tb is regularly seen in backbones ) [ Sigtran for the old generation equipments and Diameter for the newer ones ]
    Main use, yes, especially in volume; not sole use, and it's not by any means restricted to that. (Yes, telecommunications is where I've dealt with it too. And besides, PCM was already a lot better than some of the alternatives... I didn't particularly enjoy the V.35 cabling, for example.)

    Multihoming is optional, and so is message ordering; the latter property is where it's a significant improvement over TCP in this case. Besides, it can also encapsulate distinct streams within a single association, which also has potential. And it's no more static than TCP. It may be overkill, but it's a lot more mature and available than DCCP. ( The telecom folks have also tested it fairly thoroughly... )
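    The per-stream ordering point is worth spelling out. Here's a toy model of head-of-line blocking (not real SCTP code; the message list and stream names are invented): with one global ordered stream, a single lost segment stalls everything behind it, while per-stream ordering only stalls the stream that owns the hole.

    ```python
    def deliverable(messages, lost):
        """Messages the receiver can hand to the app after `lost` goes missing.

        Each message is (seq, stream). Ordering is enforced per stream, as in
        SCTP multi-streaming; put every message on one stream to get TCP's
        global ordering instead.
        """
        stalled = set()
        out = []
        for seq, stream in messages:
            if seq == lost:
                stalled.add(stream)   # hole: this stream waits for retransmit
            elif stream not in stalled:
                out.append(seq)
        return out

    msgs = [(1, "chat"), (2, "movement"), (3, "chat"), (4, "movement")]

    # TCP-style: everything on one stream, so losing seq 2 blocks 3 and 4 too.
    print(deliverable([(s, "all") for s, _ in msgs], lost=2))  # [1]

    # SCTP-style: losing a movement update leaves the chat stream unaffected.
    print(deliverable(msgs, lost=2))                           # [1, 3]
    ```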

    Also, there's a nifty LD_PRELOAD hack available on Linux to make existing TCP applications run over SCTP instead, for some quick testing, even though you miss out on some of the extra features that way.
    Code:
    withsctp(1)                 General Commands Manual                withsctp(1)
    
    NAME
           withsctp - Run TCP binaries over SCTP
    
    SYNOPSIS
           withsctp
    
    DESCRIPTION
           This package enables you to use SCTP with your  existing  TCP-based
           binaries.  withsctp uses the LD_PRELOAD hack to intercept library
           calls used for TCP connections and use SCTP instead.
    
    EXAMPLES
           withsctp xinetd
                  # Start xinetd stream services on SCTP.
    
           withsctp telnet localhost
                  # Make a telnet over SCTP/IP connection.
    
    AUTHOR
           Michael Biebl <biebl@debian.org>
    
                                                                       withsctp(1)
    Last edited by mna; 03-17-2016 at 03:50 PM.
    No longer completely f2p as of November 2014. Father of a few more DDO players.

  3. #323
    Community Member IronClan's Avatar
    Join Date
    Jan 2010
    Posts
    0

    Default

    Quote Originally Posted by -Avalon- View Post
    Disagree almost completely. Most people I know ( while not even close to everyone ) amount to well over 100 players, and only a small few complained about the lag prior to the move in such a way that they could be counted
    100 people you claim to know is far too small a sample size to be considered relevant.

    Now of course you don't REALLY know 100 people, and in any case you haven't polled them in any manner, so even if you did it would be irrelevant.

    My actual posted objective evidence ( screenshots of whole groups complaining about lag that is currently affecting everyone in the group ) suggests that lag pre-datacenter-move was quite a familiar thing among players who raid and participate in higher-level content regularly on the more populated servers. In fact I would say about 70 or 80% of the people I run into treat lag as an expected part of the game now, though that is just an anecdotal guess. Still, it's SO PREVALENT that I can't fathom these posts where people claim they never have lag and "their 100 friends rarely have lag"... I can only guess at the actual origin of such narratives.

    That lag has gotten worse over about the last 2 years is pretty obvious unless you're a soloer, or someone who plays on a low-pop server or doesn't raid regularly. If I could take screenshots of people TALKING about lag over voice chat, I would literally have hundreds of examples of diverse players acknowledging how bad the lag is across the entire group.

    Here's one thing I have literally never ever seen:

    In a lagged-out raid, one or two or more people saying they are unaffected by the lag and asking why the other 11, 10, 9, etc. players are standing still, piking.

    Not one time in the history of the game have I been in a raid where SOME of the people were totally lag-free while the others complained about not being able to move or hit anything. I've seen people LESS affected by lag stutter their way to the goal line, so to speak; I've even seen people who were frozen solid become unstuck and finish. But I've never once heard anyone say:

    "what are you guys talking about there is no lag, I am fine I guess I'll finish up for you slackers "voices""

    Not once...
    Last edited by IronClan; 03-17-2016 at 04:15 PM.

  4. #324
    Community Member
    Join Date
    Mar 2014
    Posts
    28

    Default Lag at low level quests

    Hey, the lag is affecting the lower levels too. It is making it extra tough for my permadeath guild to stay alive! We did not have any consistent lag events in level 1-7 quests over the last year-plus of play, but we have been having them ever since the move to the new data center.

    I play a version of permadeath ( so no hires, 1st-life toons, no store gear; that is, nothing special giving off fancy visual effects ) and have seen brief but continuous lag events ( freezes, rubberbanding, no visual spell/trap effects, etc. ) even in Korthos. We ( a 3-toon party @ 3rd level ) had a 2/16 lag-death in Tangleroot Elite "Hobgoblin Captives", and the lag is hurting us and making our play style more difficult.

    One specific incident: 3 party members @ 4th level running Sharn Syndicate on hard, a party of 3 permadeathers, no hires, 1 pet ( Cleric, Fighter, Artificer ).

    1. What time it happened
    2. What server you are on
    3. What is your ISP
    4. What are you seeing lag-wise (hitching, freezing, rubber banding, etc.)
    5. Are these being seen by everyone in your party
    6. Results of a tracert and pathping

    1. 8-9:30pm EST 3/16/2016
    2. Khyber
    3. AT&T
    4. Rubberbanding, brief stutter jumps, brief party member freezes of 1-5 secs, seeing other party members running in place as I move freely and the reverse, late damage, rubberbanding monsters, 1-5 sec freezes in combat for both monsters and party members, lag in the market and all quests.
    5. Yes. Example: the Bookbinder quest opening screen ( where you pick N/H/E ) would not open for anyone for about 2 mins; we moved away and it briefly flashed and disappeared for the whole party, then it worked for everyone.
    6. Sorry, no trace.

    Hope that helps!

  5. #325
    Community Member haku-ba's Avatar
    Join Date
    Oct 2009
    Posts
    306

    Default

    Haven't run any Legendary Shrouds recently. Gave in last night (Aus time) and ran one to help guildies. Lag was awful, certainly much worse than anything I had experienced before the server move. We never made it past a few portals in part 1. Clickies were taking 10+ seconds to go off. Jumps could hover in the air for 30+ seconds. Action boosts were delayed, etc. etc.

    It was as much fun as banging my head on the desk would have been. Shortmanning raids still seems doable. We have managed to complete plenty of two/three-man DoJ runs and MoD runs without too many problems. Full-group runs in Hound and Shroud have not been fun. General endgame quests like Shroud flagging runs have been 'OK', particularly solo or shortmanned. At least we finished them!

    Anything that can be done to help improve the situation would be appreciated. I can only hope this doesn't have a long-term effect on the game's future.

  6. #326
    Community Member
    Join Date
    Jun 2010
    Posts
    3,102

    Default

    Hi,

    Just got my first tracert result since the data centre move in which there wasn't a time-out after the hop from 63.236.3.130.

    Is anyone else seeing this too?

    Thanks.

  7. #327
    Community Member Nyata's Avatar
    Join Date
    Aug 2013
    Posts
    974

    Default

    Quote Originally Posted by blerkington View Post
    Hi,

    Just got my first tracert result since the data centre move in which there wasn't a time-out after the hop from 63.236.3.130.

    Is anyone else seeing this too?

    Thanks.
    #, Country, Town, Lat, Lon, IP, Hostname, Latency (ms), DNS Lookup (ms), Distance to previous node (km), Whois
    [...]
    7, Germany, Frankfurt Am Main, 50.1167, 8.683304, 80.81.194.26, xe-1-2-0.mpr1.fra4.de.above.net, 25, 21, 100, 80.81.194.26
    8, United States, (Unknown), 38.0, -97.0, 64.125.30.254, ae27.cs1.fra9.de.eth.zayo.com, 100, 113, 7834, 64.125.30.254
    9, United States, (Unknown), 38.0, -97.0, 64.125.29.54, ae0.cs1.fra6.de.eth.zayo.com, 100, 112, 0, 64.125.29.54
    10, United States, (Unknown), 38.0, -97.0, 64.125.29.59, ae2.cs1.ams17.nl.eth.zayo.com, 101, 109, 0, 64.125.29.59
    11, United States, (Unknown), 38.0, -97.0, 64.125.29.80, ae0.cs1.ams10.nl.eth.zayo.com, 100, 112, 0, 64.125.29.80
    12, United States, (Unknown), 38.0, -97.0, 64.125.29.77, ae6.cs2.lga5.us.eth.zayo.com, 100, 22, 0, 64.125.29.77
    13, United States, (Unknown), 38.0, -97.0, 64.125.30.253, ae27.cr2.lga5.us.zip.zayo.com, 100, 113, 0, 64.125.30.253
    14, United States, (Unknown), 38.0, -97.0, 64.125.29.37, ae1.cr1.lga5.us.zip.zayo.com, 100, 112, 0, 64.125.29.37
    15, United States, (Unknown), 38.0, -97.0, 64.125.20.14, ae11.mpr3.lga7.us.zip.zayo.com, 100, 109, 0, 64.125.20.14
    16, United States, New York, 40.744904, -73.9782, 128.177.168.190, 128.177.168.190.IPYX-072053-ZYO.zip.zayo.com, 100, 20, 1998, 128.177.168.190
    17, United States, New York, 40.714294, -74.006, 216.52.95.73, border1.pc2-bbnet2.nyj001.pnap.net, 121, 258, 4, 216.52.95.73
    18, United States, (Unknown), 38.0, -97.0, 70.42.39.234, turbine-14.border1.nyj001.pnap.net, 100, 21, 1996, 70.42.39.234
    19, *, *, 38.0, -97.0, *, *, 0, 0, 0, *
    20, United States, (Unknown), 38.0, -97.0, 198.252.160.23, (None), 100, 19, 0, 198.252.160.23

    for me the time-out is still there, but... uhm... I don't have 63.236.3.130 anymore, lol. On a slightly more hopeful note: I am not taking round trips through the ocean any more, and I am not visiting Chesterfield ( which exhibited some strange behaviors anyway ).

    hope that's not a random blip, keeping fingers crossed.

  8. #328
    Community Member
    Join Date
    Jun 2010
    Posts
    3,102

    Default

    Quote Originally Posted by Nyata View Post
    hope that's not a random blip, keeping fingers crossed.
    Hi,

    Yep, I hope so too. Might be a sign of things beginning to improve.

    I haven't been on much today, but after seeing that tracert result I logged on for a while and did some Thunderholme slayers. It was still a bit laggy, but will need some more time in game to tell if things are improving.

    Thanks.

    PS: Below is what it looks like now, coming from the other side of the world. Step 12 was always a time-out before; this is the first time I've seen the IP address for that step.


    Tracing route to gls.ddo.com
    over a maximum of 30 hops:

    1 <1 ms <1 ms <1 ms
    2 16 ms 15 ms 15 ms
    3 16 ms 15 ms 15 ms
    4 22 ms 24 ms 15 ms
    5 166 ms 167 ms 166 ms 216.156.85.145.ptr.us.xo.net [216.156.85.145]
    6 169 ms 166 ms 167 ms 207.88.13.226.ptr.us.xo.net [207.88.13.226]
    7 169 ms 166 ms 168 ms 207.88.13.225.ptr.us.xo.net [207.88.13.225]
    8 166 ms 168 ms 166 ms pax-brdr-01.inet.qwest.net [63.146.26.177]
    9 232 ms 229 ms 230 ms ewr-cntr-11.inet.qwest.net [205.171.17.2]
    10 232 ms 233 ms 232 ms 206.103.215.50
    11 229 ms 231 ms 236 ms 63.236.3.130
    12 229 ms 230 ms 230 ms 10.192.216.4
    13 231 ms 234 ms 231 ms gls.ddo.com [198.252.160.23]
    14 235 ms 252 ms 251 ms gls.ddo.com [198.252.160.23]

    Trace complete.
    Last edited by blerkington; 03-18-2016 at 06:03 AM.

  9. #329
    Community Member Flavilandile's Avatar
    Join Date
    Aug 2010
    Posts
    0

    Default

    Quote Originally Posted by mna View Post
    Um, what? Do you mean that SCTP would be more correct than DCCP, then? Because I meant that DCCP would be the "correct way", and that even SCTP could be better than what we have now...
    No, I mean that SCTP would be bad for the obvious reason that it needs a configured link set up between the two nodes beforehand, not something that can be opened by a client on startup.
    Since I don't know DCCP, I can't give an educated answer about how it would help ( or not ).

    Quote Originally Posted by mna View Post
    Main use, yes, especially in volume; not sole use, and it's not by any means restricted to that. (Yes, telecommunications is where I've dealt with it too. And besides, PCM was already a lot better than some of the alternatives... I didn't particularly enjoy the V.35 cabling, for example.)
    Did you ever encounter European pin sizes and American ones? ( V.35 ). Personally I always went with X.21/V.11 or V.36 every time I could; it made things a lot simpler.


    Quote Originally Posted by Nyata View Post
    #, Country, Town, Lat, Lon, IP, Hostname, Latency (ms), DNS Lookup (ms), Distance to previous node (km), Whois
    [...]
    7, Germany, Frankfurt Am Main, 50.1167, 8.683304, 80.81.194.26, xe-1-2-0.mpr1.fra4.de.above.net, 25, 21, 100, 80.81.194.26
    8, United States, (Unknown), 38.0, -97.0, 64.125.30.254, ae27.cs1.fra9.de.eth.zayo.com, 100, 113, 7834, 64.125.30.254
    9, United States, (Unknown), 38.0, -97.0, 64.125.29.54, ae0.cs1.fra6.de.eth.zayo.com, 100, 112, 0, 64.125.29.54
    10, United States, (Unknown), 38.0, -97.0, 64.125.29.59, ae2.cs1.ams17.nl.eth.zayo.com, 101, 109, 0, 64.125.29.59
    11, United States, (Unknown), 38.0, -97.0, 64.125.29.80, ae0.cs1.ams10.nl.eth.zayo.com, 100, 112, 0, 64.125.29.80
    12, United States, (Unknown), 38.0, -97.0, 64.125.29.77, ae6.cs2.lga5.us.eth.zayo.com, 100, 22, 0, 64.125.29.77
    13, United States, (Unknown), 38.0, -97.0, 64.125.30.253, ae27.cr2.lga5.us.zip.zayo.com, 100, 113, 0, 64.125.30.253
    14, United States, (Unknown), 38.0, -97.0, 64.125.29.37, ae1.cr1.lga5.us.zip.zayo.com, 100, 112, 0, 64.125.29.37
    15, United States, (Unknown), 38.0, -97.0, 64.125.20.14, ae11.mpr3.lga7.us.zip.zayo.com, 100, 109, 0, 64.125.20.14
    16, United States, New York, 40.744904, -73.9782, 128.177.168.190, 128.177.168.190.IPYX-072053-ZYO.zip.zayo.com, 100, 20, 1998, 128.177.168.190
    17, United States, New York, 40.714294, -74.006, 216.52.95.73, border1.pc2-bbnet2.nyj001.pnap.net, 121, 258, 4, 216.52.95.73
    18, United States, (Unknown), 38.0, -97.0, 70.42.39.234, turbine-14.border1.nyj001.pnap.net, 100, 21, 1996, 70.42.39.234
    19, *, *, 38.0, -97.0, *, *, 0, 0, 0, *
    20, United States, (Unknown), 38.0, -97.0, 198.252.160.23, (None), 100, 19, 0, 198.252.160.23
    Obvious is obvious. You don't need a nifty tool to locate routers.

    xe-1-2-0.mpr1.fra4.de.above.net : Frankfurt, Germany. fra4.de is the giveaway : de for Germany, fra is the IATA code for Frankfurt Flughafen.
    ae2.cs1.ams17.nl.eth.zayo.com : Amsterdam, Netherlands. ams17.nl is the giveaway : nl for the Netherlands, ams is the IATA code for Amsterdam Schiphol.
    ae6.cs2.lga5.us.eth.zayo.com : New York, USA. lga5.us is the giveaway : us for the USA, lga stands for La Guardia, one of the New York airports.
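    The decoding above is mechanical enough to script. A heuristic sketch only: backbone routers follow no single naming standard, and the AIRPORTS table here is a tiny hand-picked subset covering just this trace.

    ```python
    # Assumed label layout: <interface>.<role+index>.<iata+index>.<cc>...
    AIRPORTS = {
        "fra": "Frankfurt",
        "ams": "Amsterdam",
        "lga": "New York (La Guardia)",
    }

    def locate(hostname):
        """Guess a backbone router's city from the IATA code in its name."""
        for label in hostname.split("."):
            code = label.rstrip("0123456789")  # fra4 -> fra, ams17 -> ams
            if code in AIRPORTS:
                return AIRPORTS[code]
        return "unknown"

    print(locate("xe-1-2-0.mpr1.fra4.de.above.net"))  # Frankfurt
    print(locate("ae2.cs1.ams17.nl.eth.zayo.com"))    # Amsterdam
    print(locate("ae6.cs2.lga5.us.eth.zayo.com"))     # New York (La Guardia)
    ```

    This is also why the "United States, (Unknown), 38.0, -97.0" rows in the geolocation table are misleading: those coordinates are just the registry default for the whole US, while the hostnames plainly say fra, ams, and lga.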
    On G-Land : Flavilandile, Blacklock, Yaelle, Millishande, Larilandile, Gildalinde, Tenalafel, and many other...

  10. #330
    Community Member Holymunchkin's Avatar
    Join Date
    Jun 2011
    Posts
    322

    Default

    Quote Originally Posted by IronClan View Post
    stuff....
    Here's the thing right...

    The people experiencing this game-breaking lag are leaving. The people who aren't are staying.
    The people who aren't obviously don't run 12-man raids on LE or EE.
    This game has been broken for that subsection for over 2 years now.
    People reach this height and then make one of two decisions...

    ...I'll bear with it...

    OR

    I'm out---off to greener pastures

    I've canceled my sub personally. Turbine has a month to make LE shroud playable for me or I won't bother with this game.

  11. #331
    Executive Producer Severlin's Avatar
    Join Date
    Apr 2014
    Posts
    0

    Default

    So this morning we did optimization work to clean up network pathing problems and open up new network paths, which should alleviate, if not solve, lag related to network pathing. Thank you for all the trace routes. At this point we will be watching tonight to see who is improved, who isn't, and whether or not there is still lag related to server performance rather than network pathing.

    Sev~

  12. #332
    Community Member Gauthaag's Avatar
    Join Date
    Sep 2009
    Posts
    1,410

    Default

    Quote Originally Posted by Severlin View Post
    So this morning we did optimization work to clean up network pathing problems and open up new network paths, which should alleviate, if not solve, lag related to network pathing. Thank you for all the trace routes. At this point we will be watching tonight to see who is improved, who isn't, and whether or not there is still lag related to server performance rather than network pathing.

    Sev~
    Cool, thanks for the information.
    Quote Originally Posted by Coyopa View Post
    As far as Gauthaag goes, don't let him get you riled up. This is what he does. Constantly.

  13. #333
    Community Member
    Join Date
    Nov 2010
    Posts
    5

    Default

    Gah! Just ran an LN Shroud yesterday to try to get a completion despite the lag. I'll make sure to run HoX, TS, and DoJ tonight to see if I notice any changes on my end.

  14. #334
    Intergalactic Space Crusader
    Treasure Hunter
    Livmo's Avatar
    Join Date
    Apr 2013
    Posts
    0

    Default Yay!

    Quote Originally Posted by Severlin View Post
    So this morning we did optimization work to clean up network pathing problems and open up new network paths, which should alleviate, if not solve, lag related to network pathing. Thank you for all the trace routes. At this point we will be watching tonight to see who is improved, who isn't, and whether or not there is still lag related to server performance rather than network pathing.

    Sev~
    Thanks for doing this!

    I hope it helps.

    Happy Friday!

  15. #335
    Community Member Elsbet's Avatar
    Join Date
    Mar 2007
    Posts
    802

    Default

    Quote Originally Posted by Severlin View Post
    So this morning we did optimization work to clean up network pathing problems and open up new network paths, which should alleviate, if not solve, lag related to network pathing. Thank you for all the trace routes. At this point we will be watching tonight to see who is improved, who isn't, and whether or not there is still lag related to server performance rather than network pathing.

    Sev~
    Thank you! This merits its own post/announcement in service news.

    ~Anaelsbet~; ~Elsbet~; ~Lilabet~; ~Islabet~; ~Phaeddre~
    ~Ascent~

  16. #336
    Community Member
    Join Date
    May 2010
    Posts
    62

    Default

    Did a VoN5 and several single-group quests since this fix and it's no better.

    * I should mention it's no better for anyone in the raid or group, not just me; we're all lagging at the same time.
    Last edited by Varthalos; 03-18-2016 at 01:16 PM.

  17. #337
    Executive Producer Severlin's Avatar
    Join Date
    Apr 2014
    Posts
    0

    Default

    Quote Originally Posted by Varthalos View Post
    Did a VoN5 and several single-group quests since this fix and it's no better.

    * I should mention it's no better for anyone in the raid or group, not just me; we're all lagging at the same time.
    What server and what time?

    Sev~

  18. #338
    Community Member Taskmage's Avatar
    Join Date
    Nov 2009
    Posts
    124

    Default

    Quote Originally Posted by Severlin View Post
    So this morning we did optimization work to clean up network pathing problems and open up new network paths, which should alleviate, if not solve, lag related to network pathing. Thank you for all the trace routes. At this point we will be watching tonight to see who is improved, who isn't, and whether or not there is still lag related to server performance rather than network pathing.

    Sev~
    Can't see any difference, except perhaps it got worse. We just had three wipes in Leg HoX due to lag. Every time we killed a keeper: lag spike. Every time a portal spawned three Reavers: lag spike.
    Whatever you changed, it didn't help; at the very least, any effect is below the human-detectable threshold.

    Since you asked just as I posted this: Thelanis, the last hour before I made this post ( no idea what your timezone is ).
    Active: Taceus [Rog 20e10, GXMechanic, Thelanis]
    Semi-active: Delig [Art 8, Thelanis] | Denidra [Rog 20e10 GXMechanic, Thelanis]
    Inactive: Ithira Kh'thzar [Pal/FvS 3/6, Thelanis] | Broesel [Bar/Ftr 18/2e3, Thelanis]

  19. #339
    Community Member
    Join Date
    May 2013
    Posts
    1,074

    Default

    Quote Originally Posted by Flavilandile View Post
    No I mean that SCTP would be bad for the obvious reason that it needs a configured link set up between the two nodes beforehand, not something that can be opened by a client starting.
    Since I don't know DCCP I can't give an educated answer about how it would help ( or not ).
    Well, yeah. The server needs to be listening first, just like in TCP, except with a different handshake ( 4-way ) to protect against SYN flooding.
    Even though the telecom folks often do it the hard and tedious way, that doesn't mean it's an inherent requirement of the protocol...
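    Why the 4-way handshake resists SYN-flood-style attacks can be shown in miniature: the server commits no state until the client echoes back a verifiable cookie. This is a condensed simulation of the cookie mechanism, not real SCTP code; the secret key and addresses are made up.

    ```python
    import hashlib
    import hmac

    SECRET = b"server-secret"  # hypothetical per-server key

    def make_cookie(client_addr):
        """The stateless INIT-ACK: a MAC over the client's address."""
        return hmac.new(SECRET, client_addr.encode(), hashlib.sha256).digest()

    def handshake(client_addr, echoed_cookie):
        """INIT -> INIT-ACK(cookie) -> COOKIE-ECHO -> COOKIE-ACK, condensed.

        The association is created only if the echoed cookie verifies, so a
        spoofed INIT costs the server nothing but a stateless reply.
        """
        expected = make_cookie(client_addr)
        if hmac.compare_digest(expected, echoed_cookie):
            return "association up"
        return "dropped"

    cookie = make_cookie("203.0.113.7")            # server's INIT-ACK reply
    print(handshake("203.0.113.7", cookie))        # association up
    print(handshake("203.0.113.7", b"\x00" * 32))  # dropped
    ```

    Contrast with TCP, where the server allocates connection state on the very first SYN, which is what a SYN flood exploits.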


    DCCP is meant to become a sort of improved UDP, and the current draft spec isn't yet "supported", even for testing, by much of anything except relatively recent Linux and some BSDs.
    No longer completely f2p as of November 2014. Father of a few more DDO players.

  20. #340
    Community Member Baraz's Avatar
    Join Date
    Aug 2013
    Posts
    29

    Default

    The lag has not improved. It took me over 2 minutes from the time I clicked on Valeria Sinderwind until I was finally back in Korthos.

    Ghallanda about five minutes prior to my post.

