path: root/include/linux/skbuff.h
Age  Commit message  Author
2008-12-15  net: Add skb_gro_receive  (Herbert Xu)
This patch adds the helper skb_gro_receive to merge packets for GRO. The current method is to allocate a new header skb and then chain the original packets to its frag_list. This is done to make it easier to integrate into the existing GSO framework. In future as GSO is moved into the drivers, we can undo this and simply chain the original packets together. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
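As a rough illustration of the frag_list chaining described above (a conceptual sketch only, not the helper's actual body; the prototype of skb_gro_receive() is not quoted in this log), the merge step amounts to something like:

    #include <linux/skbuff.h>

    /* Sketch: hang a newly merged packet off the header skb's frag_list
     * and account for its length, as the description above outlines. */
    static void gro_chain_sketch(struct sk_buff *header, struct sk_buff *skb)
    {
        struct sk_buff **pp = &skb_shinfo(header)->frag_list;

        while (*pp)                    /* find the end of the chain */
            pp = &(*pp)->next;
        skb->next = NULL;
        *pp = skb;

        header->data_len += skb->len;
        header->len      += skb->len;
        header->truesize += skb->truesize;
    }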
2008-11-24  tcp: Try to restore large SKBs while SACK processing  (Ilpo Järvinen)
During SACK processing, most of the benefits of TSO are eaten by the SACK blocks that fragment SKBs one by one into MSS-sized chunks. We then run into problems when the cleanup work for them has to be done once a large cumulative ACK arrives. Try to return to the pre-split state already while more and more SACK info gets discovered, by combining newly discovered SACK areas with the previous skb if that one is SACKed as well.

This approach has a number of benefits:

1) The processing overhead is spread more equally over the RTT.
2) The write queue has fewer skbs to process (this affects everything that has to walk the queue past the sacked areas).
3) The write queue is consistent the whole time, so no other part of TCP has to be aware of this (this was not the case with some other approach that was, well, quite intrusive all around).
4) clean_rtx_queue can release most of the pages using a single put_page instead of the previous PAGE_SIZE/mss+1 calls.

In case a hole is fully filled by the new SACK block, we attempt to combine the next skb too, which allows construction of skbs that are even larger than what TSO split them to, and it handles the hole-on-every-nth patterns that often occur during slow start overshoot pretty nicely. For this to be really useful, though, a retransmission would also have to get lost, since cumulative ACKs advance one hole at a time in the most typical case.

TODO: handle upwards-only merging. That should be rather easy when the segment is fully sacked, but I'm leaving it as a future work item (it won't make a very large difference anyway, since the current approach already covers quite a lot of normal cases).

I was earlier thinking of some sophisticated way of tracking the timestamps of the first and the last segment, but later realized that it isn't necessary at all to store the timestamp of the last segment. The cases that can occur are basically either: 1) ambiguous => no sensible measurement can be taken anyway, or 2) non-ambiguous due to reordering => having the timestamp of the last segment there just skews things further off rather than doing any good, since the ACK got triggered by one of the holes (besides some subtle issues that would make determining the right hole/skb an even harder problem). Anyway, it has nothing to do with this change then.

I chose to route some abnormal-looking cases through goto noop; some could be handled differently (e.g., by stopping the walk at that skb). In general, they either shouldn't happen at all or are rare enough to make no difference in practice.

In theory this change (as a whole) could cause some macroscale (global) regression because of cache misses taken over the round-trip time, but it very likely gets better because of far fewer (local) cache misses for the other write queue walkers and for the big recovery-clearing cumulative ACK.

Worth noting: these benefits would be very easy to get also without TSO/GSO being on, as long as the data is in pages so that we can merge them. Currently I won't let that happen, because DSACK splitting at a fragment would mess up pcounts due to sk_can_gso in tcp_set_skb_tso_segs. Once DSACK fragmentation gets avoided, some conditions can be made less strict.

TODO: I will probably have to convert the excessive pointer passing to struct sacktag_state... :-)

My testing revealed that a considerable amount of skbs couldn't be shifted because they were cloned (most likely still awaiting tx reclaim)...

[The rest is left as future work instead, since I got repeated EFAULTs from tcpdump's recvfrom when I added pskb_expand_head to deal with clones, so I separated that into another, later patch.]

...To counter that, I gave up on the fifth advantage:

5) When growing a previous SACK block, fewer allocations for new skbs are done; basically a new alloc is needed only when a new hole is detected and when the previous skb runs out of frags space

...which now only happens if reclaim is fast enough to dispose of the clone before the SACK block comes in (the window is RTT long), otherwise we'll have to alloc some. With clones being handled I got these numbers (they will be somewhat worse without that), taken with fine-grained mibs:

TCPSackShifted                   398
TCPSackMerged                    877
TCPSackShiftFallback             320
TCPSACKCOLLAPSEFALLBACKGSO         0
TCPSACKCOLLAPSEFALLBACKSKBBITS     0
TCPSACKCOLLAPSEFALLBACKSKBDATA     0
TCPSACKCOLLAPSEFALLBACKBELOW       0
TCPSACKCOLLAPSEFALLBACKFIRST       1
TCPSACKCOLLAPSEFALLBACKPREVBITS  318
TCPSACKCOLLAPSEFALLBACKMSS         1
TCPSACKCOLLAPSEFALLBACKNOHEAD      0
TCPSACKCOLLAPSEFALLBACKSHIFT       0
TCPSACKCOLLAPSENOOPSEQ             0
TCPSACKCOLLAPSENOOPSMALLPCOUNT     0
TCPSACKCOLLAPSENOOPSMALLLEN        0
TCPSACKCOLLAPSEHOLE               12

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-31  mac80211: Re-enable aggregation  (Sujith)
Wireless hardware without any dedicated queues for aggregation does not need the ampdu_queues mechanism present right now in mac80211. Since mac80211 is still incomplete wrt the TX MQ changes, do not allow aggregation sessions for drivers that set ampdu_queues. This is only an interim hack until Intel fixes the requeue issue. Signed-off-by: Sujith <Sujith.Manoharan@atheros.com> Signed-off-by: Luis Rodriguez <Luis.Rodriguez@Atheros.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2008-10-28  net: reduce structures when XFRM=n  (Alexey Dobriyan)
ifdef out:
 * struct sk_buff::sp        (pointer)
 * struct dst_entry::xfrm    (pointer)
 * struct sock::sk_policy    (2 pointers)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-07  net: packet split receive api  (Peter Zijlstra)
Add some packet-split receive hooks. For one, this allows doing NUMA-node-affine page allocations. Later on these hooks will be extended to do emergency reserve allocations for fragments. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-10-01  net: add skb_recycle_check() to enable netdriver skb recycling  (Lennert Buytenhek)
This patch adds skb_recycle_check(), which can be used by a network driver after transmitting an skb to check whether this skb can be recycled as a receive buffer. skb_recycle_check() checks that the skb is not shared or cloned, that it is linear, and that its head portion is large enough (as determined by the driver) to be recycled as a receive buffer. If these conditions are met, it does any necessary reference count dropping and cleans up the skbuff as if it just came from __alloc_skb(). Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
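A minimal TX-completion sketch of the intended driver usage, assuming the helper takes the skb plus the driver's receive-buffer size and returns non-zero when recycling is possible (the exact prototype is not quoted above); the priv structure and rx_recycle queue are hypothetical:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>

    struct my_priv {                    /* hypothetical driver private data */
        struct sk_buff_head rx_recycle;
        int rx_buf_size;
    };

    static void tx_complete_sketch(struct my_priv *priv, struct sk_buff *skb)
    {
        if (skb_recycle_check(skb, priv->rx_buf_size))
            __skb_queue_head(&priv->rx_recycle, skb);   /* reuse as RX buffer */
        else
            dev_kfree_skb(skb);
    }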
2008-09-23  net: Add skb_queue_walk_from() and skb_queue_walk_from_safe().  (David S. Miller)
These will be used by TCP write queue handling and elsewhere. Signed-off-by: David S. Miller <davem@davemloft.net>
2008-09-23  net: Add skb_queue_next().  (David S. Miller)
A lot of code wants to iterate over an SKB queue at the top level using its own control structure and iterator scheme. Provide skb_queue_next(), which is only valid to invoke if skb_queue_is_last() returns false. Signed-off-by: David S. Miller <davem@davemloft.net>
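A minimal open-coded walk combining the two helpers (skb_queue_is_last() is the entry just below); the queue and the starting skb come from the caller's own control structure, and the skb is assumed to be a valid element of the queue:

    #include <linux/skbuff.h>

    static void walk_sketch(struct sk_buff_head *queue, struct sk_buff *skb)
    {
        for (;;) {
            /* ... inspect skb here ... */
            if (skb_queue_is_last(queue, skb))
                break;
            skb = skb_queue_next(queue, skb);
        }
    }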
2008-09-23  net: Add skb_queue_is_last().  (David S. Miller)
Several bits of code want to know "is this the last SKB in a queue", and all of them implement this by hand. Provide a common interface to make this check. Signed-off-by: David S. Miller <davem@davemloft.net>
2008-09-22  net: Fix bug in SKB queue splicing interfaces.  (David S. Miller)
Handle the case of head being non-empty, by adding list->qlen to head->qlen instead of using direct assignment. Signed-off-by: David S. Miller <davem@davemloft.net>
2008-09-21  net: Add new interfaces for SKB list light-weight init and splicing.  (David S. Miller)
This will be used by subsequent changesets. Signed-off-by: David S. Miller <davem@davemloft.net>
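The one-line description does not name the new interfaces; assuming they are the lock-less __skb_queue_head_init() and the skb_queue_splice*() family that appear in this header around this time, a usage sketch might be:

    #include <linux/skbuff.h>

    /* Sketch: move everything from 'rx_queue' onto a private list in one
     * splice, then free it.  The caller is assumed to hold whatever lock
     * protects rx_queue around the splice. */
    static void drain_sketch(struct sk_buff_head *rx_queue)
    {
        struct sk_buff_head tmp;
        struct sk_buff *skb;

        __skb_queue_head_init(&tmp);    /* no spinlock init needed */
        skb_queue_splice_tail_init(rx_queue, &tmp);

        while ((skb = __skb_dequeue(&tmp)) != NULL)
            kfree_skb(skb);
    }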
2008-09-11  net: Add SKB DMA mapping helper functions.  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-09-11  net: Add DMA mapping tokens to skb_shared_info.  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-08-15  net: skb_copy_datagram_from_iovec()  (Rusty Russell)
There's an skb_copy_datagram_iovec() to copy out of a paged skb, but nothing the other way around (because we don't do that). We want to allocate big skbs in tun.c, so let's add the function. It's a carbon copy of skb_copy_datagram_iovec() with enough changes to be annoying. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
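A usage sketch for a tun-style transmit path, assuming the new function's signature mirrors skb_copy_datagram_iovec() (skb, offset, iovec, length); the wrapper name is hypothetical:

    #include <linux/skbuff.h>
    #include <linux/uio.h>
    #include <linux/errno.h>

    static int tun_copy_sketch(struct sk_buff *skb, struct iovec *iv, int len)
    {
        /* copy 'len' bytes from the user iovec into the (possibly paged)
         * skb, starting at offset 0 */
        if (skb_copy_datagram_from_iovec(skb, 0, iv, len))
            return -EFAULT;
        return 0;
    }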
2008-08-11  skbuff: Code readability NiT  (Gerrit Renker)
Inserting a space around the `-' improves C readability (some languages allow hyphens within function and variable names, which is confusing). Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-31  skbuff: add missing kernel-doc for do_not_encrypt  (Randy Dunlap)
Add missing kernel-doc notation to sk_buff: Warning(linux-2.6.27-rc1-git2//include/linux/skbuff.h:345): No description found for parameter 'do_not_encrypt' Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-29  mac80211: partially fix skb->cb use  (Johannes Berg)
This patch fixes mac80211 to not use the skb->cb over the queue step from virtual interfaces to the master. The patch also, for now, disables aggregation because that would still require requeuing; that will be fixed in a separate patch. There are two other places (software requeue and powersaving stations) where requeuing can happen, but those are, respectively, not currently used by any drivers and not possible to use. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2008-07-14  vlan: Don't store VLAN tag in cb  (Patrick McHardy)
Use a real skb member to store the VLAN tag to avoid clashes with qdiscs, which are allowed to use the cb area themselves. As currently only real devices that consume the skb set the NETIF_F_HW_VLAN_TX flag, no explicit invalidation is necessary. The new member fills a hole on 64 bit; the skb layout changes from:

        __u32              mark;               /*   172     4 */
        sk_buff_data_t     transport_header;   /*   176     4 */
        sk_buff_data_t     network_header;     /*   180     4 */
        sk_buff_data_t     mac_header;         /*   184     4 */
        sk_buff_data_t     tail;               /*   188     4 */
        /* --- cacheline 3 boundary (192 bytes) --- */
        sk_buff_data_t     end;                /*   192     4 */
        /* XXX 4 bytes hole, try to pack */

to:

        __u32              mark;               /*   172     4 */
        __u16              vlan_tci;           /*   176     2 */
        /* XXX 2 bytes hole, try to pack */
        sk_buff_data_t     transport_header;   /*   180     4 */
        sk_buff_data_t     network_header;     /*   184     4 */

Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-07-08  net: Delete NETDEVICES_MULTIQUEUE kconfig option.  (David S. Miller)
Multiple TX queue support is a core networking feature. Signed-off-by: David S. Miller <davem@davemloft.net>
2008-06-19  net: Discard and warn about LRO'd skbs received for forwarding  (Ben Hutchings)
Add skb_warn_if_lro() to test whether an skb was received with LRO and warn if so. Change br_forward(), ip_forward() and ip6_forward() to call it, and discard the skb if it returns true. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
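The forwarding-path pattern described above, as a sketch (the wrapper function and its error value are illustrative, not from the patch):

    #include <linux/skbuff.h>
    #include <linux/errno.h>

    static int forward_check_sketch(struct sk_buff *skb)
    {
        if (skb_warn_if_lro(skb)) {   /* warns and returns true for LRO'd skbs */
            kfree_skb(skb);           /* discard, as br/ip/ip6_forward now do */
            return -EINVAL;
        }
        return 0;
    }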
2008-04-21  skbuff: fix missing kernel-doc notation  (Randy Dunlap)
Add kernel-doc notation for ndisc_nodetype: Warning(linux-2.6.25-git2//include/linux/skbuff.h:340): No description found for parameter 'ndisc_nodetype' Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-14  [SKB]: __skb_queue_tail = __skb_insert before  (Gerrit Renker)
This expresses __skb_queue_tail() in terms of __skb_insert(), using __skb_insert_before() as an auxiliary function. Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-14  [SKB]: __skb_append = __skb_queue_after  (Gerrit Renker)
This expresses __skb_append in terms of __skb_queue_after, exploiting that __skb_append(old, new, list) = __skb_queue_after(list, old, new). Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-14  [SKB]: __skb_queue_after(prev) = __skb_insert(prev, prev->next)  (Gerrit Renker)
By reordering, __skb_queue_after() is expressed in terms of __skb_insert(). Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-04-14  [SKB]: __skb_dequeue = skb_peek + __skb_unlink  (Gerrit Renker)
By rearranging the order of declarations, __skb_dequeue() is expressed in terms of skb_peek() and __skb_unlink(), thus in effect mirroring the analogous implementation of __skb_dequeue_tail(). Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
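The reordering described above lets the helper take roughly this shape (a sketch of the refactored form, renamed here to avoid clashing with the real header):

    #include <linux/skbuff.h>

    /* mirrors how __skb_dequeue() can now be expressed */
    static inline struct sk_buff *skb_dequeue_sketch(struct sk_buff_head *list)
    {
        struct sk_buff *skb = skb_peek(list);

        if (skb)
            __skb_unlink(skb, list);
        return skb;
    }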
2008-04-03  [IPV6] NDISC: Don't rely on node-type hint from L2 unless required.  (YOSHIFUJI Hideaki)
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
2008-04-03  [IPV6] SIT: Add PRL management for ISATAP.  (Templin, Fred L)
This patch updates the Linux Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) implementation. It places the ISATAP potential router list (PRL) in the kernel and adds three new private ioctls for PRL management. [Add several changes of structure names, constant names etc. - yoshfuji] Signed-off-by: Fred L. Templin <fred.l.templin@boeing.com> Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
2008-03-27  [NET]: uninline skb_trim, de-bloats  (Ilpo Järvinen)
Allyesconfig (v2.6.24-mm1):

-10976  209 funcs, 123 +, 11099 -, diff: -10976 --- skb_trim

Without number of debug related CONFIGs (v2.6.25-rc2-mm1):

-7360  192 funcs, 131 +, 7491 -, diff: -7360 --- skb_trim

skb_trim | +42

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-27  [NET]: uninline skb_push, de-bloats a lot  (Ilpo Järvinen)
Allyesconfig (v2.6.24-mm1):

-21593  356 funcs, 2418 +, 24011 -, diff: -21593 --- skb_push

Without many debug related CONFIGs (v2.6.25-rc2-mm1):

-13890  341 funcs, 189 +, 14079 -, diff: -13890 --- skb_push

skb_push | +46

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-27  [NET]: uninline dev_alloc_skb, de-bloats a lot  (Ilpo Järvinen)
Allyesconfig (v2.6.24-mm1):

-23668  392 funcs, 104 +, 23772 -, diff: -23668 --- dev_alloc_skb

Without many debug CONFIGs (v2.6.25-rc2-mm1):

-12178  382 funcs, 157 +, 12335 -, diff: -12178 --- dev_alloc_skb

dev_alloc_skb | +37

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-27  [NET]: uninline skb_pull, de-bloats a lot  (Ilpo Järvinen)
Allyesconfig (v2.6.24-mm1):

-28162  354 funcs, 3005 +, 31167 -, diff: -28162 --- skb_pull

Without number of debug related CONFIGs (v2.6.25-rc2-mm1):

-9697  338 funcs, 221 +, 9918 -, diff: -9697 --- skb_pull

skb_pull | +44

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-27  [NET]: uninline skb_put, de-bloats a lot  (Ilpo Järvinen)
Allyesconfig (v2.6.24-mm1): ~500 files changed ...

869 funcs, 198 +, 111003 -, diff: -110805 --- skb_put

skb_put | +104

Without number of debug related CONFIGs (v2.6.25-rc2-mm1):

-60744  855 funcs, 861 +, 61605 -, diff: -60744 --- skb_put

skb_put | +57

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-03-05  [IPV4]: Add 'rtable' field in struct sk_buff to alias 'dst' and avoid casts  (Eric Dumazet)
(Anonymous) unions can help us to avoid ugly casts. A common cast is the (struct rtable *)skb->dst one. Defining a union like:

        union {
                struct dst_entry *dst;
                struct rtable    *rtable;
        };

permits using skb->rtable in its place. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
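With the union in place, the cast-free access reads as below (sketch; the helper name is illustrative):

    #include <linux/skbuff.h>
    #include <net/route.h>

    static struct rtable *skb_rtable_sketch(const struct sk_buff *skb)
    {
        return skb->rtable;    /* instead of (struct rtable *)skb->dst */
    }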
2008-02-18  net: fix kernel-doc warnings in header files  (Randy Dunlap)
Add missing structure kernel-doc descriptions to sock.h & skbuff.h to fix kernel-doc warnings. (I think that Stephen H. sent a similar patch, but I can't find it. I just want to kill the warnings, with either patch.) Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-04  virtio: Implement skb_partial_csum_set, for setting partial csums on untrusted packets  (Rusty Russell)
Use it in virtio_net (replacing the buggy version there); it's also going to be used by TAP for partial csum support. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: David S. Miller <davem@davemloft.net>
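A receive-path sketch for a virtio/tap-style driver, assuming the helper takes (skb, csum_start, csum_offset) from the untrusted header and returns false when the offsets are out of range; the wrapper is hypothetical:

    #include <linux/skbuff.h>
    #include <linux/errno.h>

    static int set_partial_csum_sketch(struct sk_buff *skb,
                                       u16 csum_start, u16 csum_offset)
    {
        if (!skb_partial_csum_set(skb, csum_start, csum_offset))
            return -EINVAL;    /* untrusted offsets were bogus */
        return 0;
    }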
2008-01-31  [NETFILTER]: bridge netfilter: remove nf_bridge_info read-only netoutdev member  (Patrick McHardy)
Before the removal of the deferred output hooks, netoutdev was used in case of VLANs on top of a bridge to store the VLAN device, so the deferred hooks would see the correct output device. This isn't necessary anymore since we're calling the output hooks for the correct device directly in the IP stack. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-01-28  [UDP]: Only increment counter on first peek/recv  (Herbert Xu)
The previous move of the UDP inDatagrams counter caused each peek of the same packet to be counted separately. This may be undesirable. This patch fixes this by adding a bit to sk_buff to record whether this packet has already been seen through skb_recv_datagram. We then only increment the counter when the packet is seen for the first time. The only dodgy part is the fact that skb_recv_datagram doesn't have a good way of returning this new bit of information. So I've added a new function __skb_recv_datagram that does return this and made skb_recv_datagram a wrapper around it. The plan is to eventually replace all uses of skb_recv_datagram with this new function, at which time it can be renamed to its proper name. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
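The wrapper arrangement described above plausibly reduces to something like the following (sketch; the exact prototypes are not quoted in this log, and the function name is illustrative):

    #include <linux/skbuff.h>
    #include <linux/socket.h>

    static struct sk_buff *recv_datagram_sketch(struct sock *sk, unsigned flags,
                                                int noblock, int *err)
    {
        int peeked;    /* reports whether the skb had already been peeked */

        return __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
                                   &peeked, err);
    }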
2008-01-28  [UDP]: Avoid repeated counting of checksum errors due to peeking  (Herbert Xu)
Currently it is possible for two processes to peek on the same socket and end up incrementing the error counter twice for the same packet. This patch fixes it by making skb_kill_datagram return whether it succeeded in unlinking the packet and only incrementing the counter if it did. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2008-01-28  [TCP]: Splice receive support.  (Jens Axboe)
Support for network splice receive. Signed-off-by: Jens Axboe <jens.axboe@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-11-26  [SKBUFF]: Free old skb properly in skb_morph  (Herbert Xu)
The skb_morph function only freed the data part of the dst skb, but leaked the auxiliary data such as the netfilter fields. This patch fixes this by moving the relevant parts from __kfree_skb to skb_release_all and calling it in skb_morph. It also makes kfree_skbmem static since it's no longer called anywhere else and it now no longer does skb_release_data. Thanks to Yasuyuki KOZAKAI for finding this problem and posting a patch for it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-11-10  [NET]: Fix skb_truesize_check() assertion  (Chuck Lever)
The intent of the assertion in skb_truesize_check() is to check for skb->truesize being decremented too much by other code, resulting in a wraparound below zero. The type of the right side of the comparison causes the compiler to promote the left side to an unsigned type, despite the presence of an explicit type cast. This defeats the check for negativity. Ensure both sides of the comparison are a signed type to prevent the implicit type conversion. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
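A sketch of the signed comparison the fix calls for; skb_truesize_bug() is assumed to be the existing reporting helper:

    #include <linux/skbuff.h>

    static void truesize_check_sketch(struct sk_buff *skb)
    {
        int len = sizeof(struct sk_buff) + skb_headlen(skb);

        /* cast the left side so the whole comparison stays signed and a
         * wrapped-around truesize is actually caught */
        if (unlikely((int)skb->truesize < len))
            skb_truesize_bug(skb);
    }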
2007-10-23  [NET]: Treat the sign of the result of skb_headroom() consistently  (Chuck Lever)
In some places, the result of skb_headroom() is compared to an unsigned integer, and in others, the result is compared to a signed integer. Make the comparisons consistent and correct. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-22  [NET]: Cut off the queue_mapping field from sk_buff  (Pavel Emelyanov)
Just hide it behind the #ifdef, because nobody wants it now. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-22  [NET]: Make and use skb_get_queue_mapping  (Pavel Emelyanov)
Make the helper for getting the field symmetrical to the "set" one. Return 0 if CONFIG_NETDEVICES_MULTIQUEUE=n. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
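Given the description (return the field, or 0 when CONFIG_NETDEVICES_MULTIQUEUE=n), the helper has roughly this shape (sketch, renamed to avoid clashing with the real header):

    #include <linux/skbuff.h>

    static inline u16 skb_get_queue_mapping_sketch(struct sk_buff *skb)
    {
    #ifdef CONFIG_NETDEVICES_MULTIQUEUE
        return skb->queue_mapping;
    #else
        return 0;
    #endif
    }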
2007-10-22  [NET]: Fix SKB_WITH_OVERHEAD calculation  (Herbert Xu)
The calculation in SKB_WITH_OVERHEAD is incorrect in that it can cause an overflow across a page boundary which is what it's meant to prevent. In particular, the header length (X) should not be lumped together with skb_shared_info. The latter needs to be aligned properly while the header has no choice but to sit in front of wherever the payload is. Therefore the correct calculation is to take away the aligned size of skb_shared_info, and then subtract the header length. The resulting quantity L satisfies the following inequality: SKB_DATA_ALIGN(L + X) + sizeof(struct skb_shared_info) <= PAGE_SIZE This is the quantity used by alloc_skb to do the actual allocation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
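The corrected macro implied by the description (reserve only the aligned skb_shared_info; the header length X is accounted for by the caller), written as a sketch under a different name to avoid redefining the real macro:

    #include <linux/skbuff.h>

    #define SKB_WITH_OVERHEAD_SKETCH(X) \
        ((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))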
2007-10-15  Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (Linus Torvalds)
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (42 commits)
  [IPV6]: Consolidate the ip6_pol_route_(input|output) pair
  [TCP]: Make snd_cwnd_cnt 32-bit
  [TCP]: Update the /proc/net/tcp documentation
  [NETNS]: Don't panic on creating the namespace's loopback
  [NEIGH]: Ensure that pneigh_lookup is protected with RTNL
  [INET]: kmalloc+memset -> kzalloc in frag_alloc_queue
  [ISDN]: Fix compile with CONFIG_ISDN_X25 disabled.
  [IPV6]: Replace sk_buff ** with sk_buff * in input handlers
  [SELINUX]: Update for netfilter ->hook() arg changes.
  [INET]: Consolidate the xxx_put
  [INET]: Small cleanup for xxx_put after evictor consolidation
  [INET]: Consolidate the xxx_evictor
  [INET]: Consolidate the xxx_frag_destroy
  [INET]: Consolidate the xxx_secret_rebuild
  [INET]: Consolidate the xxx_frag_kill
  [INET]: Collect common frag sysctl variables together
  [INET]: Collect frag queues management objects together
  [INET]: Move common fields from frag_queues in one place.
  [TG3]: Fix performance regression on 5705.
  [ISDN]: Remove local copy of device name to make sure renames work.
  ...
2007-10-15  [SKBUFF]: Add skb_morph  (Herbert Xu)
This patch creates a new function skb_morph that's just like skb_clone except that it lets user provide the spare skb that will be overwritten by the one that's to be cloned. This will be used by IP fragment reassembly so that we get back the same skb that went in last (rather than the head skb that we get now which requires us to carry around double pointers all over the place). Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
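Assuming a prototype of the form struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src) (not quoted in this log), the clone-versus-morph difference is simply where the destination comes from:

    #include <linux/skbuff.h>

    static struct sk_buff *reuse_sketch(struct sk_buff *spare, struct sk_buff *src)
    {
        /* like skb_clone(src, GFP_ATOMIC), but the result lands in the
         * caller-supplied 'spare' instead of a freshly allocated skb */
        return skb_morph(spare, src);
    }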
2007-10-15  Add skb_is_gso_v6  (Brice Goglin)
Add skb_is_gso_v6(). Signed-off-by: Brice Goglin <brice@myri.com> Signed-off-by: Jeff Garzik <jeff@garzik.org>
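By analogy with skb_is_gso(), the new helper most likely has this shape (sketch, renamed to avoid clashing with the real header):

    #include <linux/skbuff.h>

    static inline int skb_is_gso_v6_sketch(const struct sk_buff *skb)
    {
        return skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6;
    }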
2007-09-16  [NET] skbuff: Add skb_cow_head  (Herbert Xu)
This patch adds an optimised version of skb_cow that avoids the copy if the header can be modified even if the rest of the payload is cloned. This can be used in encapsulating paths where we only need to modify the header. As it is, this can be used in PPPOE and bridging. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
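An encapsulation-path sketch of the intended usage: only the header is written, so only the header needs to be made private before pushing the new one (the wrapper and its hdr_len parameter are illustrative):

    #include <linux/skbuff.h>

    static int push_header_sketch(struct sk_buff *skb, unsigned int hdr_len)
    {
        int err = skb_cow_head(skb, hdr_len);   /* copy the header only if needed */

        if (err)
            return err;
        __skb_push(skb, hdr_len);
        /* ... write the new encapsulation header at skb->data ... */
        return 0;
    }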
2007-07-31  [NET]: Page offsets and lengths need to be __u32.  (David S. Miller)
Based upon a report from Stephen Rothwell. Signed-off-by: David S. Miller <davem@davemloft.net>