1. muellerto's Avatar
    Following on from my previous posts, I'm still experimenting with the Push option in my e-mail accounts.

    I got some help from my provider; in particular, I now know for sure that their IMAP server indeed supports the IMAP IDLE feature, which is not mentioned on their website, and we figured this out together.

    BTW: One major problem was to distinguish between "idling" and polling - BB10 allows both in parallel! This was a new finding for me, and my bad results (high data traffic) probably came about because my BB was both "idling" and polling a lot at the same time. (I guess this is something many BB users have, because nobody tells you that this will indeed happen if you don't pay attention.) But if "idling" effectively works you probably don't need polling at all, so you must set polling to "Manually" while Push is on. I have had this for two days now, and when it works, it works very well, with a delay of under 30s and data traffic of only a few kB per hour (150 kB per day).

    But - now my issue - sometimes I'm not sure if my BB is indeed "idling". I guess it falls asleep from time to time. Especially if I switch off the data connection overnight, I believe the "idling" process doesn't restart by itself in the morning, and even after hours the device will not get pushed messages. What to do then? Do I always have to reboot to start this feature? That would mean I shouldn't turn off the network connection anymore.
    05-14-14 02:29 AM
  2. Omnitech's Avatar
    I should write a FAQ on this, I've answered it enough times already...

    IMAP DOES require polling, even with "IDLE" active, because IDLE only notifies of new messages. The regular polling cycle is responsible for reconciling other changes, such as message moves and deletions, to the various endpoints.

    IDLE is also dependent on network conditions. In order for this to work, it has to establish a long-term open TCP socket so it can receive immediate notification of any changes occurring on the server side. Normally, with the usual stateful firewalls and NAT devices in the network path, a server cannot easily initiate an inbound connection to a host on a private network. So the host on the private network (your BlackBerry in this case) initiates an outgoing connection to the server and keeps that connection "alive" for a long time period. (Typically this lasts around 30 minutes or so, whereupon the remote device lets the connection close and then re-establishes another one.)
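    To make that concrete, here is a minimal sketch of how a client drives IMAP IDLE over one long-lived connection (Python standard library; the host, credentials and 25-minute refresh are illustrative assumptions, and imaplib has no built-in IDLE helper before Python 3.13, so the command is sent by hand - a real client needs far more error handling and mailbox resynchronization):

        import imaplib
        import itertools

        HOST = "imap.example.com"            # hypothetical server
        USER, PASSWORD = "user", "secret"    # hypothetical credentials

        conn = imaplib.IMAP4_SSL(HOST)       # one outgoing TLS connection, kept open
        conn.login(USER, PASSWORD)
        conn.select("INBOX")

        for n in itertools.count(1):
            tag = b"A%03d" % n
            conn.send(tag + b" IDLE\r\n")    # enter IDLE on the already-open socket
            conn.readline()                  # server answers with "+ idling"

            # Refresh well before the typical ~30-minute firewall/NAT idle timeout.
            conn.socket().settimeout(25 * 60)
            try:
                line = conn.readline()       # blocks until the server pushes an update
                if b"EXISTS" in line:
                    print("new message notification:", line)
            except OSError:
                pass                         # nothing arrived in time, start a new IDLE round
            finally:
                conn.send(b"DONE\r\n")       # leave IDLE (simplified: pending lines are ignored)
                conn.readline()              # tagged completion response

    The notification only tells the client that something changed; the regular sync/poll still has to reconcile the mailbox afterwards, as described above.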

    Now here's the rub: the thing about establishing a long-term open TCP connection is that certain network elements are unfriendly to this. Common example: inexpensive home network equipment. There are 2 challenges to overcome when supporting this sort of traffic:

    1) It's more costly to build routers and firewalls with enough buffer memory to maintain long-term open sessions without starving other packet streams for buffer memory. This is why low-end home network equipment oftentimes will not allow a 30-minute open TCP session - I've seen some that want to close them in 60 seconds.

    2) Allowing long-term open sessions increases the difficulty of protecting from network attacks, and requires more sophisticated measures in the equipment.

    So if you're stuck with something in your network path that is "unfriendly" to these long sessions that IMAP and other "push" internet protocols require, it will prevent them from "pushing" because it will keep closing the link. In such cases, email notifications will revert to the "polling interval".

    Lastly - IMAP is rather "dumb" about this. If something won't allow it to keep a session open for its preferred time (typically 30 min), it will just stop pushing entirely. Microsoft's Exchange ActiveSync (EAS) protocol, on the other hand, has a mechanism where it will start trying to reduce the session time progressively until it finds something that works with the network path in use. (Until it gets down to a few minutes, whereupon it reverts to polling like IMAP.)
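    The EAS behaviour described above boils down to a back-off loop along these lines (a rough sketch only, not a real ActiveSync client; send_ping is a hypothetical stand-in for the long-lived EAS Ping request, and the numbers are illustrative rather than Microsoft's defaults):

        def wait_for_push(send_ping, start=30 * 60, floor=5 * 60):
            """Try progressively shorter heartbeat intervals until one survives
            the network path; below the floor, fall back to plain polling."""
            interval = start
            while interval >= floor:
                try:
                    return send_ping(timeout=interval)   # long-lived request held open by the server
                except ConnectionError:
                    interval //= 2                       # the path killed the session early, so shorten it
            return None                                  # revert to interval polling, like IMAP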

    So in a nutshell, if you're having issues like that, it's more likely it will work if you use EAS than if you use IMAP.

    From what you describe it sounds like there may be something in your case that is not re-establishing a long-term open session after a nighttime break, but I haven't seen many complaints about that kind of issue with BB10 so far.
    muellerto likes this.
    05-14-14 03:23 AM
  3. muellerto's Avatar
    IMAP DOES require polling, even with "IDLE" active, because IDLE only notifies of new messages. The regular polling cycle is responsible for reconciling other changes, such as message moves and deletions, to the various endpoints.
    Yes, sure, no question. But the most interesting thing is the notification that there is a new message. And this alone should not need a poll. The next step would be that I open the Hub to see what's going on. And this should then cause a manual poll. This is how I learned here at CB to understand the "Manually" polling option.
    From what you describe it sounds like there may be something in your case that is not re-establishing a long-term open session after a nighttime break, but I haven't seen many complaints about that kind of issue with BB10 so far.
    It happens even after a reboot that the idle process doesn't run. This is bizarre. Sometimes it runs and sometimes it doesn't. I don't know what this depends on.
    05-14-14 04:01 AM
  4. muellerto's Avatar
    It very well might, if something in the network path is terminating the IMAP IDLE session prematurely.

    For example, most people would never know about a transient WiFi connectivity loss that forces the device to switch to cellular data temporarily. This kind of thing happens constantly for many users. In such a case, the open IMAP IDLE session would terminate.
    So what polling interval should the user select if "idling" via the Push option normally works? And is this polling executed even though it is not needed?
    05-14-14 05:10 AM
  5. Omnitech's Avatar
    So what polling interval should the user select if "idling" via the Push option normally works? And is this polling executed even though it is not needed?
    I don't know what you're asking. I'm not suggesting changing any polling interval. I'm simply explaining some of the ways that IMAP IDLE can fail.

    The usual solution is to replace lousy home network equipment.

    Actually one of the Canadian carriers had their cellular network configured in a push-unfriendly way shortly after BB10 launched. They corrected it fairly quickly though.

    Easy test: if you turn off WiFi for a few days and still have the same issue, it's not likely that it's a network issue as explained above.

    Who is your carrier?
    05-14-14 05:53 AM
  6. muellerto's Avatar
    Easy test: if you turn off WiFi for a few days and still have the same issue, it's not likely that it's a network issue as explained above.
    Because I'm out of any wifi for a long time of the day, my wifi is always off, except when I'm on a hotspot and need it. But, sure, switching wifi on and off should also cut existing connections, in both cases. It would be interesting to see what happens then.
    Who is your carrier?
    Swisscom. The biggest provider in Switzerland, the former state enterprise. Swisscom's network is normally not bad at all, especially in the cities.

    I don't think it's a network issue in my case. As I said before, it works properly and absolutely as expected for a long time (even with polling on "Manually"). The network looks stable. Yesterday evening I switched off the data network connection; that was the obvious reason why the "idling" stopped. And this morning it didn't start again. My guess is a silly issue on the device: the event that the network connection is back is not handled correctly, or something like that. Sure, this also depends on the software version, 10.2.1.2102; perhaps it is already fixed. - I was also told by somebody here to reboot after making changes to an e-mail account, because those changes are not always recognized by the Hub (or the PIM services underneath) and may otherwise have no effect, and this is true, I've seen it myself. This e-mail account configuration is, overall, a bit fractious.
    05-14-14 06:29 AM
  7. muellerto's Avatar
    Yes, but that infrastructure is still there and still being used by the 60+ million subscribers at the last count.
    Friends, I have posted my opinion about this several times, but BIS was not at all the topic of my thread. My thread is about what we have now, and I want to use it right; that means I want to get the most out of the technical capabilities.
    05-14-14 06:44 AM
  8. belfastdispatcher's Avatar
    Friends, I have posted my opinion about this several times, but BIS was not at all the topic of my thread. My thread is about what we have now, and I want to use it right; that means I want to get the most out of the technical capabilities.
    What we have now is basically as good as it gets, and even EAS will have similar issues.


    #believeinfilm
    05-14-14 06:55 AM
  9. Richard Buckley's Avatar
    It is difficult to separate each client connection from an IMAP server in the log when you have (as I do) several desktop clients and several mobile clients all connecting to one account. I will be able to do this for after work today because I won't be home until late. But here is a sample of what I saw from my Z10 yesterday. Notice there are some very long standing IMAP connections.

    I'm not sure I understand where Omnitech is coming from with the idea that TCP connections can't last longer than 30 minutes. We have existing TCP connections used for database synchronization that have been up for months through cheap carrier-provided DSL routers. A mobile device is somewhat different because it can change network access points without warning. But as you can see from the attached picture, when one isn't mobile for a few hours the connections can endure. And when they break they are often re-established in seconds when a communications path still exists.


    [Attached image: BB 10 email push option-imap_idle.png]
    05-14-14 07:00 AM
  10. belfastdispatcher's Avatar
    It is difficult to separate each client connection from an IMAP server in the log when you have (as I do) several desktop clients and several mobile clients all connecting to one account. I will be able to do this for after work today because I won't be home until late. But here is a sample of what I saw from my Z10 yesterday. Notice there are some very long standing IMAP connections.

    I'm not sure I understand where Omnitech is coming from with the idea that TCP connections can't last longer than 30 minutes. We have existing TCP connections used for database synchronization that have been up for months through cheap carrier-provided DSL routers. A mobile device is somewhat different because it can change network access points without warning. But as you can see from the attached picture, when one isn't mobile for a few hours the connections can endure. And when they break they are often re-established in seconds when a communications path still exists.


    [Attached image: IMAP_IDLE.PNG]
    That's what I've always said: if you spend all day in an office you'll never notice problems, or only very rarely.

    If you're always on the move, however, it becomes a big, big problem. The same goes if you often find yourself on 2G.

    When I first got the Z10 I ran it in parallel with a 9900, and the difference in email delivery was huge: sometimes a 30-minute delay on the Z10.


    #believeinfilm
    05-14-14 07:14 AM
  11. Ecm's Avatar
    Closed for review
    05-14-14 08:24 AM
  12. Ecm's Avatar
    Re-opened.

    I have tweaked the thread title to clarify the fact that the OP's thread is about BlackBerry 10 push email. The off-topic references have been deleted.

    This is not to devolve into another discussion about BB OS BIS email vs. BB 10 email. If you wish to join that debate, please join one of the existing threads.

    Please keep on topic!

    Elessar.cm / Moderator.
    Omnitech likes this.
    05-14-14 08:35 AM
  13. Omnitech's Avatar
    I'm not sure I understand where Omnitech is coming from with the idea that TCP connections can't last longer than 30 minutes. We have existing TCP connections used for database synchronization that have been up for months through cheap carrier-provided DSL routers.
    All stateful firewalls (which includes most common NAT devices, almost all modern residential networking equipment and most commercial layer-3 networking equipment) have a timeout period beyond which they will close any open TCP "sessions". (From the OS perspective you might call this a "socket".)

    A "session" is defined as a TCP handshake along the lines of:

    1) Hello, are you there? (Endpoint)
    2) Yes I am, what can I do for you? (Server, the "Bartender")
    3) I'd like to open a "TCP bar tab", please.
    4) No problem, tab opened, let me know when you'd like me to cash out your tab.
    5) [waits for data and "close request"]
    6) OK, I'm ready to close that tab and terminate the session.
    7) Acknowledged, tab closed. (Bartender)
    8) Hey, can I open another tab?? (Endpoint)
    [...]

    If the session sits in state #5 for longer than the session timeout period and no data is sent or received, the firewall will time out and tear down that session/socket and no more data can be sent over it. If it did not do this, it would quickly exhaust its packet buffers keeping all these "zombie" sessions open whenever packets were lost in transit or for various other reasons.
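    A toy model of that session table and its idle timeout looks roughly like this (the data structures and the 30-minute figure are illustrative, not any particular vendor's implementation):

        import time

        IDLE_TIMEOUT = 30 * 60      # seconds an established session may sit silent

        sessions = {}               # (src, sport, dst, dport) -> time a packet was last seen

        def packet_seen(flow):
            sessions[flow] = time.monotonic()       # any traffic on the flow refreshes its entry

        def expire_idle_sessions():
            now = time.monotonic()
            for flow, last_seen in list(sessions.items()):
                if now - last_seen > IDLE_TIMEOUT:
                    del sessions[flow]              # "zombie" session torn down; later packets on it are dropped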

    The reason your db sessions stay up for a long time is either because enough data passes over that session/socket to keep it open, or one or both sides are sending keepalive or "heartbeat" packets (this was the mechanism underlying the infamous "Heartbleed" OpenSSL flaw), or because the protocol you are using at a higher OSI layer isn't relying on a single long-lived TCP session to maintain communications anyway.

    IMAP relies on this mechanism in part because it circumvents the common problem of an external host trying to initiate communications with a protected host behind a firewall or NAT device. Instead of "initiating" a connection to notify the endpoint when new mail arrives (which is usually impossible in the modern internet), it simply sends something back over that already-open, "long-lived" TCP session/socket, circumventing the usual router/firewall/NAT restrictions. This is also how many modern "firewall friendly" remote access products like "LogMeIn" or "TeamViewer" work. (Though rather than long-term open sockets they are probably sending small but consistent data - ie "keepalives" or status packets across the link that was initiated by the endpoint behind the firewall.)
    05-14-14 08:56 AM
  14. Richard Buckley's Avatar
    That's what I've always said: if you spend all day in an office you'll never notice problems, or only very rarely.

    If you're always on the move, however, it becomes a big, big problem. The same goes if you often find yourself on 2G.

    When I first got the Z10 I ran it in parallel with a 9900, and the difference in email delivery was huge: sometimes a 30-minute delay on the Z10.


    #believeinfilm
    I left the office at 1505 yesterday. Went home (an 80 km commute). Had supper. Went to Tim Hortons (13 km from home), logged into their Wi-Fi which automatically enabled my VPN. Left Tim Hortons to go to a local community group meeting. Then went home. The short connectivity periods correspond to changes in service. TCP can't maintain a connection when you change bearers.

    When I first got my Z10 I also ran comparisons with my 9810 on our development BES and a 9700 on BIS. I found no significant difference between IMAP services to the three devices.

    If you're trying to work over crappy networking, then yes, I can see the benefit: having BlackBerry put a server in the carrier's mobile network and maintain a decent network from the NOC to that machine will solve problems. It just isn't an economically scalable solution. And it ends up being a very expensive subsidy that people with access to decent data pay for people without.
    05-14-14 09:27 AM
  15. Richard Buckley's Avatar
    All stateful firewalls (which includes most common NAT devices, almost all modern residential networking equipment and most commercial layer-3 networking equipment) have a timeout period beyond which they will close any open TCP "sessions". (From the OS perspective you might call this a "socket".)

    A "session" is defined as a TCP handshake along the lines of:

    1) Hello, are you there? (Endpoint)
    2) Yes I am, what can I do for you? (Server, the "Bartender")
    3) I'd like to open a "TCP bar tab", please.
    4) No problem, tab opened, let me know when you'd like me to cash out your tab.
    5) [waits for data and "close request"]
    6) OK, I'm ready to close that tab and terminate the session.
    7) Acknowledged, tab closed. (Bartender)
    8) Hey, can I open another tab?? (Endpoint)
    [...]

    If the session sits in state #5 for longer than the session timeout period and no data is sent or received, the firewall will time out and tear down that session/socket and no more data can be sent over it. If it did not do this, it would quickly exhaust its packet buffers keeping all these "zombie" sessions open whenever packets were lost in transit or for various other reasons.
    No packet buffers are required to keep a TCP connection alive, not at the endpoints and not at NAT devices in between. Only the address/port association data is needed, which is finite and limited, but not to the extent that packet buffers are.

    The reason your db sessions stay up for a long time is either because enough data passes over that session/socket to keep it open, or one or both sides are sending keepalive or "heartbeat" packets (this was the mechanism underlying the infamous "Heartbleed" OpenSSL flaw), or because the protocol you are using at a higher OSI layer isn't relying on a single long-lived TCP session to maintain communications anyway.
    True, except the Heartbleed bug was in code supporting DTLS and not needed for TLS over TCP. TCP has its own keepalive protocol.

    IMAP relies on this mechanism in part because it circumvents the common problem of an external host trying to initiate communications with a protected host behind a firewall or NAT device. Instead of "initiating" a connection to notify the endpoint when new mail arrives (which is usually impossible in the modern internet), it simply sends something back over that already-open, "long-lived" TCP session/socket, circumventing the usual router/firewall/NAT restrictions. This is also how many modern "firewall friendly" remote access products like "LogMeIn" or "TeamViewer" work. (Though rather than long-term open sockets they are probably sending small but consistent data - ie "keepalives" or status packets across the link that was initiated by the endpoint behind the firewall.)
    True, but it is useful even for systems which are not behind NAT filters but have dynamic IP addresses.

    Depending on the frequency of data sent back and forth, keeping a TCP connection active for long periods by exchanging TCP keepalive messages (set by enabling the option on the socket when creating the connection) can be much more efficient than setting up and tearing down the connection at regular intervals. This is especially true when you add TLS session negotiation on top of that.
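    For reference, enabling that option from a client looks something like this in Python (the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are Linux-specific, and the timing values and host name are illustrative):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)       # turn TCP keepalives on
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)    # idle seconds before the first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)    # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)       # failed probes before the link is declared dead
        s.connect(("imap.example.com", 993))                          # hypothetical IMAP server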
    05-14-14 09:46 AM
  16. belfastdispatcher's Avatar
    Why does somebody keep deleting posts? They're relevant to the discussion.


    #believeinfilm
    05-14-14 09:47 AM
  17. Ecm's Avatar
    Why does somebody keep deleting posts? They're relevant to the discussion.


    #believeinfilm
    I clearly stated that the topic of this thread is BlackBerry 10 email - not BIS. Bringing BIS back into the thread is off-topic and will continue to be deleted, at the least. There are many threads where you can debate the virtues of BIS, but this isn't one of them.
    southlander and ppeters914 like this.
    05-14-14 09:53 AM
  18. Omnitech's Avatar
    No packet buffers are required to keep a TCP connection alive, not at the endpoints and not at NAT devices in between. Only the address/port association data is needed, which is finite and limited, but not to the extent that packet buffers are.

    I would assume that any device passing network traffic that has created a logical entity to handle a communication session would assign some default number of buffers to handle that communication, or else when packets come in for it, they would be dropped.

    A stateful firewall's ability to handle traffic is often directly implied by its specification for "maximum number of simultaneous sessions", which is a common performance metric for such devices. Low-end devices support a low number of simultaneous sessions (and typically do not even specify what this limit is); high-end devices support a high number of simultaneous sessions.



    True, except the Heartbleed bug was in code supporting DTLS and not needed for TLS over TCP. TCP has its own keepalive protocol.

    I know all about that; I wasn't trying to say these things are identical, I was explaining the general concept of a "keepalive". Hopefully you personally don't need my "bartender" analogy to understand how a TCP handshake works.
    05-14-14 09:55 AM
  19. belfastdispatcher's Avatar
    I clearly stated that the topic of this thread is BlackBerry 10 email - not BIS. Bringing BIS back into the thread is off-topic and will continue to be deleted, at the least. There are many threads where you can debate the virtues of BIS, but this isn't one of them.
    Yeah but we weren't talking about BIS anymore, we were talking about the hardware BlackBerry has in place with the carriers that could be utilised in BB10 email, so I believe you were wrong to delete the post.

    It's existing infrastructure BB10 could use just like it uses the NOCs to quickly set up emails



    #believeinfilm
    05-14-14 09:56 AM
  20. Ecm's Avatar
    Yeah but we weren't talking about BIS anymore, we were talking about the hardware BlackBerry has in place with the carriers that could be utilised in BB10 email, so I believe you were wrong to delete the post.

    It's existing infrastructure BB10 could use just like it uses the NOCs to quickly set up emails



    #believeinfilm
    BlackBerry could do a number of things, including using NOC. However they are not currently doing so. Until that happens, NOC isn't relevant to the OP's thread or his stated question and is therefore off topic.
    Omnitech likes this.
    05-14-14 10:18 AM
  21. muellerto's Avatar
    Went to Tim Hortons
    Great!
    The short connectivity periods correspond to changes in service. TCP can't maintain a connection when you change bearers.
    But ... what I don't understand is why this should be a problem. If the device gets into a new cell because of physical movement, it could easily open new TCP connections and treat the old ones as broken. This is also what I would expect when the user closes his network connection and opens it again. An IMAP server should notice this. And even if the connection is not available for a while, the client could try to reopen it again.
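    As a rough sketch, such client-side recovery could look like this (open_idle_session is a hypothetical helper standing in for the login/select/IDLE steps, and the back-off values are arbitrary):

        import time

        def keep_idling(open_idle_session, max_backoff=15 * 60):
            backoff = 5
            while True:
                try:
                    open_idle_session()                   # blocks for as long as IDLE stays healthy
                    backoff = 5                           # clean exit, reset the back-off
                except OSError:
                    time.sleep(backoff)                   # network gone, wait and try again
                    backoff = min(backoff * 2, max_backoff)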

    What I must meanwhile discard is the earlier idea that switching to wifi and returning to cellular would have an impact on my issue. It does not. I can switch to wifi and back, but the idle mechanism doesn't get disturbed; it also runs very well while I'm on wifi and after I've returned to cellular. And I assume for sure that the Hub (the PIM services) uses wifi.

    It's almost evening now. Tonight I will leave my data connection on to see if the idle mechanism is still running tomorrow morning.
    05-14-14 10:58 AM
  22. Richard Buckley's Avatar
    Great!
    But ... what I don't understand is why this should be a problem. If the device gets into a new cell because of physical movement, it could easily open new TCP connections and treat the old ones as broken. This is also what I would expect when the user closes his network connection and opens it again. An IMAP server should notice this. And even if the connection is not available for a while, the client could try to reopen it again.
    Changing cells won't have an impact, and didn't on me. One of the longest connection periods included my hour and a half commute. Neither does switching to and from Wi-Fi impact my service. This was my point. The technology is able to deal with these issues, but only if it is configured and run properly.
    05-14-14 11:14 AM
  23. Richard Buckley's Avatar
    I would assume that any device passing network traffic that has created a logical entity to handle a communication session would assign some default number of buffers to handle that communication, or else when packets come in for it, they would be dropped.
    That is a bad assumption. If a packet arrives at a node and there is no place to save it, the packet is silently discarded. This is an important function. It is the discarding of packets that communicates to the originator that the network is reaching congestion. If the network is congested, then it is likely that any reply saying "hey, slow down" is going to be lost. When a packet is discarded the originator doesn't receive an acknowledgement. If the acknowledgement is itself lost the originator doesn't receive it. In either case the originator goes through a back-off re-send procedure. It is a strange concept to wrap one's head around, but that is how TCP/IP works, and always has. In fact, having too much buffer storage for the available bandwidth only makes things worse because packets hang around in buffers too long and the endpoints can't adapt to the available bandwidth. But I digress.

    The point is, the NAT traversal state is one finite resource that is comparatively small on a per session basis. Packet buffers are a different resource. The maximum size of an IP packet is 64K. Yes, the Ethernet MTU is 1500, but routers don't route Ethernet. It gets complicated very quickly, but if one wanted to allocate a few packet buffers per session just in case they might be needed, a router could easily end up reserving most of its memory and never using it.

    To put this in some perspective the Cisco Small Business RV180 VPN router I have on my desk has the following performance specifications:

    NAT Throughput: 800 Mbps
    Concurrent Sessions: 12,000

    A stateful firewall's ability to handle traffic is often directly implied by its specification for "maximum number of simultaneous sessions", which is a common performance metric for such devices. Low-end devices support a low number of simultaneous sessions (and typically do not even specify what this limit is); high-end devices support a high number of simultaneous sessions.
    Yes, but that is not determined by packet buffer space.



    I know all about that; I wasn't trying to say these things are identical, I was explaining the general concept of a "keepalive". Hopefully you personally don't need my "bartender" analogy to understand how a TCP handshake works.
    Um, no.
    Last edited by Richard Buckley; 05-14-14 at 01:22 PM. Reason: Add some concrete data.
    05-14-14 11:32 AM
  24. Omnitech's Avatar
    That is a bad assumption. If a packet arrives at a node and there is no place to save it, the packet is silently discarded. This is an important function. It is the discarding of packets that communicates to the originator that the network is reaching congestion. If the network is congested, then it is likely that any reply saying "hey, slow down" is going to be lost. When a packet is discarded the originator doesn't receive an acknowledgement. If the acknowledgement is itself lost the originator doesn't receive it. In either case the originator goes through a back-off re-send procedure. It is a strange concept to wrap one's head around, but that is how TCP/IP works, and always has. In fact, having too much buffer storage for the available bandwidth only makes things worse because packets hang around in buffers too long and the endpoints can't adapt to the available bandwidth. But I digress.

    "Digress" is not exactly how I'd put it.

    If a network device like a router or firewall drops packets willy-nilly simply because it's so poorly designed that it doesn't allocate buffers to receive packets when the resources are otherwise perfectly available (ie when traffic is idle), then it's a broken device. There is no need to discuss the general need for congestion management when the device in question is not congested, just broken.



    The point is, the NAT traversal state is one finite resource that is comparatively small on a per session basis. Packet buffers are a different resource. The maximum size of an IP packet is 64K. Yes, the Ethernet MTU is 1500, but routers don't route Ethernet. It gets complicated very quickly, but if one wanted to allocate a few packet buffers per session just in case they might be needed, a router could easily end up reserving most of its memory and never using it.
    We are not talking about buffers being allocated that "might be needed" - we are talking about an established TCP session which was established for the sole purpose of communicating something. This is why session timeouts are set - resource management in case the established session becomes idle. On the firewalls that I most commonly use I can set whatever customized session timeout I want for every individual protocol that traverses the device, though the default is generally about 30 minutes. (There are various other resource-management methods available as well, ie source IP or destination IP-based session limits, QoS controls, etc etc.)

    In the old days, before stateful firewalls and NAT devices (which have to maintain sessions or state for every data stream that passes through them) were common, that kind of resource management wasn't an issue. Then again, it was a real PITA to pass FTP traffic or protocols with indeterminate source/destination port #s without opening up the device to the whole world, too.
    05-14-14 08:24 PM
  25. Richard Buckley's Avatar
    "Digress" is not exactly how I'd put it.

    If a network device like a router or firewall drops packets willy-nilly simply because it's so poorly designed that it doesn't allocate buffers to receive packets when the resources are otherwise perfectly available (ie when traffic is idle), then it's a broken device. There is no need to discuss the general need for congestion management when the device in question is not congested, just broken.





    We are not talking about buffers being allocated that "might be needed" - we are talking about an established TCP session which was established for the sole purpose of communicating something. This is why session timeouts are set - resource management in case the established session becomes idle. On the firewalls that I most commonly use I can set whatever customized session timeout I want for every individual protocol that traverses the device, though the default is generally about 30 minutes. (There are various other resource-management methods available as well, ie source IP or destination IP-based session limits, QoS controls, etc etc.)

    In the old days, before stateful firewalls and NAT devices (which have to maintain sessions or state for every data stream that passes through them) were common, that kind of resource management wasn't an issue. Then again, it was a real PITA to pass FTP traffic or protocols with indeterminate source/destination port #s without opening up the device to the whole world, too.
    Not broken, that is how TCP is designed to work. If you don't believe me I can't help it, but you should study the protocol design before you pass judgement. Remember, firewalls, stateful or otherwise, are a slim minority of routers. IP routing is designed to be stateless. Each packet has all the information needed for it to be routed from source to destination. It is only when we want to interdict traffic, or obscure addressing, that we need to keep state in routers.

    There is no natural reason to assume a TCP connection, or a UDP association, would be as short as 30 minutes. I suspect you are talking about inactive time before the state is discarded. If not, then that would be exactly the poor network management decision that would cause problems for IMAP IDLE and other protocols by breaking the connection early, even though it is being kept alive. As you can see from my earlier post, my Z10 was able to keep sessions active for hours. That is what is needed to provide responsive IMAP push.

    Nor am I talking about discarding packets willy-nilly, but in a very considered way. There are actually algorithms for deciding which packets in the buffer should be discarded to provide better throughput for all active sessions.
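    One classic example is Random Early Detection (RED), which drops packets with increasing probability as the queue fills so that TCP senders slow down before the buffer overflows; a sketch with illustrative thresholds:

        import random

        MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1     # queue-length thresholds (packets) and maximum drop probability

        def should_drop(avg_queue_len):
            if avg_queue_len < MIN_TH:
                return False                                    # plenty of room, keep the packet
            if avg_queue_len >= MAX_TH:
                return True                                     # queue effectively full, drop it
            frac = (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
            return random.random() < frac * MAX_P               # probabilistic early drop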

    Posted via CB10
    Last edited by Richard Buckley; 05-14-14 at 10:42 PM.
    05-14-14 10:28 PM