path: root/include
2007-04-25  [SK_BUFF]: Introduce ipip_hdr(), remove skb->h.ipiph  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce tcp_hdr(), remove skb->h.th  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
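For illustration: all of the *_hdr() accessors introduced in this series follow the same pattern. A hedged sketch of what tcp_hdr() amounts to (the real definition, presumably in include/linux/tcp.h, may differ in detail; the _sketch name is ours):

#include <linux/skbuff.h>
#include <linux/tcp.h>

/* Sketch only: typed accessor replacing direct skb->h.th dereferences. */
static inline struct tcphdr *tcp_hdr_sketch(const struct sk_buff *skb)
{
        return (struct tcphdr *)skb->h.raw;     /* the transport header */
}

Callers then write tcp_hdr(skb)->source instead of skb->h.th->source; udp_hdr(), icmp_hdr(), sctp_hdr() and friends below are analogous for their respective headers.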
2007-04-25  [TCP]: Introduce tcp_hdrlen() and tcp_optlen()  (Arnaldo Carvalho de Melo)
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to avoid the longer, open coded equivalent. Ditched a no-op in bnx2 in the process. I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()... Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
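Since doff counts 32-bit words, the two helpers presumably boil down to the following (a sketch; the in-tree versions in include/linux/tcp.h are authoritative, and the _sketch names are ours):

/* Sketch: full TCP header length in bytes, and length of the options
 * beyond the fixed 20-byte header (doff == 5 means no options). */
static inline unsigned int tcp_hdrlen_sketch(const struct sk_buff *skb)
{
        return tcp_hdr(skb)->doff * 4;
}

static inline unsigned int tcp_optlen_sketch(const struct sk_buff *skb)
{
        return (tcp_hdr(skb)->doff - 5) * 4;
}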
2007-04-25  [SK_BUFF]: Introduce icmp_hdr(), remove skb->h.icmph  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce udp_hdr(), remove skb->h.uh  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce igmp_hdr() & friends, remove skb->h.igmph  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [ICMP6]: Introduce icmp6_hdr()  (Arnaldo Carvalho de Melo)
For consistency with all the other skb->h.raw accessors. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SCTP]: Introduce sctp_hdr()  (Arnaldo Carvalho de Melo)
For consistency with all the other skb->h.raw accessors. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_set_transport_header  (Arnaldo Carvalho de Melo)
For the cases where the transport header is being set to an offset from skb->data. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_transport_offset()  (Arnaldo Carvalho de Melo)
For the quite common 'skb->h.raw - skb->data' sequence. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_reset_transport_header(skb)  (Arnaldo Carvalho de Melo)
For the common, open coded 'skb->h.raw = skb->data' operation, so that we can later turn skb->h.raw into an offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. This one touches just the simplest cases: skb->h.raw = skb->data; skb->h.raw = {skb_push|[__]skb_pull}() The next ones will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
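Taken together, the transport-header helpers from the three entries above are thin wrappers around skb->h.raw. A hedged sketch of what they amount to while h.raw is still a pointer (the real helpers live in include/linux/skbuff.h; the _sketch names are ours):

static inline void skb_reset_transport_header_sketch(struct sk_buff *skb)
{
        skb->h.raw = skb->data;                 /* transport header starts at data */
}

static inline void skb_set_transport_header_sketch(struct sk_buff *skb, const int offset)
{
        skb->h.raw = skb->data + offset;        /* transport header at a known offset */
}

static inline int skb_transport_offset_sketch(const struct sk_buff *skb)
{
        return skb->h.raw - skb->data;
}

Once h.raw becomes an offset on 64-bit, only these helpers need to change, not their callers.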
2007-04-25  [SK_BUFF]: Introduce ipv6_hdr(), remove skb->nh.ipv6h  (Arnaldo Carvalho de Melo)
Now the skb->nh union has just one member, .raw, i.e. it is just like the skb->mac union, strange, no? I'm just leaving it like that till the transport layer is done with, when we'll rename skb->mac.raw to skb->mac_header (or ->mac_header_offset?), ditto for ->{h,nh}. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce arp_hdr(), remove skb->nh.arph  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce ip_hdr(), remove skb->nh.iph  (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [IP]: Introduce ip_hdrlen()  (Arnaldo Carvalho de Melo)
For the common sequence "skb->nh.iph->ihl * 4", removing a good number of open coded skb->nh.iph uses, now to go after the rest... Just out of curiosity, here are the idioms found to get the same result: skb->nh.iph->ihl << 2 skb->nh.iph->ihl<<2 skb->nh.iph->ihl * 4 skb->nh.iph->ihl*4 (skb->nh.iph)->ihl * sizeof(u32) Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
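In other words, ip_hdrlen() is presumably just the canonical spelling of those idioms, built on the ip_hdr() accessor introduced above (a sketch; the _sketch name is ours):

/* Sketch: IHL is in 32-bit words, so the IPv4 header length in bytes is ihl * 4. */
static inline unsigned int ip_hdrlen_sketch(const struct sk_buff *skb)
{
        return ip_hdr(skb)->ihl * 4;
}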
2007-04-25  [SK_BUFF]: Introduce skb_set_network_header  (Arnaldo Carvalho de Melo)
For the cases where the network header is being set to an offset from skb->data. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_network_header()  (Arnaldo Carvalho de Melo)
For the places where we need a pointer to the network header, it is still legal to touch skb->nh.raw directly if just adding to, subtracting from or setting it to another layer header. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_network_offset()  (Arnaldo Carvalho de Melo)
For the quite common 'skb->nh.raw - skb->data' sequence. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_reset_network_header(skb)  (Arnaldo Carvalho de Melo)
For the common, open coded 'skb->nh.raw = skb->data' operation, so that we can later turn skb->nh.raw into an offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. This one touches just the simplest case; the next ones will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
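As with the transport layer, the network-header helpers from the entries above are presumably thin wrappers around skb->nh.raw; a sketch while nh.raw is still a pointer (real helpers in include/linux/skbuff.h, _sketch names ours):

static inline unsigned char *skb_network_header_sketch(const struct sk_buff *skb)
{
        return skb->nh.raw;
}

static inline void skb_reset_network_header_sketch(struct sk_buff *skb)
{
        skb->nh.raw = skb->data;
}

static inline void skb_set_network_header_sketch(struct sk_buff *skb, const int offset)
{
        skb->nh.raw = skb->data + offset;
}

static inline int skb_network_offset_sketch(const struct sk_buff *skb)
{
        return skb->nh.raw - skb->data;
}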
2007-04-25  [PPPOE]: Introduce pppoe_hdr()  (Arnaldo Carvalho de Melo)
For consistency with all the other skb->nh.raw accessors. Also do some really obvious simplifications in pppoe_recvmsg, well the kfree_skb one is not so obvious, but free() and kfree() have the same behaviour (hint :-) ). Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [LLC]: Kill llc_set_pdu_hdr  (Arnaldo Carvalho de Melo)
We'll have skb_reset_network_header soon. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_mac_header()  (Arnaldo Carvalho de Melo)
For the places where we need a pointer to the mac header, it is still legal to touch skb->mac.raw directly if just adding to, subtracting from or setting it to another layer header. This one also converts some more cases to skb_reset_mac_header() that my regex missed as it had no spaces before nor after '=', ugh. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_set_mac_header()  (Arnaldo Carvalho de Melo)
For the cases where we want to set skb->mac.raw to an offset from skb->data. Simple cases first, the memmove ones and specially pktgen will be left for later. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [SK_BUFF]: Introduce skb_reset_mac_header(skb)  (Arnaldo Carvalho de Melo)
For the common, open coded 'skb->mac.raw = skb->data' operation, so that we can later turn skb->mac.raw into an offset, reducing the size of struct sk_buff in 64bit land while possibly keeping it as a pointer on 32bit. This one touches just the simplest case; the next ones will handle the slightly more "complex" cases. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
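The mac-layer helpers mirror the transport and network sketches above, this time around skb->mac.raw (again a sketch, not the in-tree definitions):

static inline void skb_reset_mac_header_sketch(struct sk_buff *skb)
{
        skb->mac.raw = skb->data;
}

static inline void skb_set_mac_header_sketch(struct sk_buff *skb, const int offset)
{
        skb->mac.raw = skb->data + offset;
}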
2007-04-25  [NET]: Adding SO_TIMESTAMPNS / SCM_TIMESTAMPNS support  (Eric Dumazet)
Now that network timestamps use the ktime_t infrastructure, we can add a new SOL_SOCKET sockopt, SO_TIMESTAMPNS. This command is similar to SO_TIMESTAMP, but permits transmission of a 'struct timespec' instead of a 'struct timeval' control message (nanosecond resolution instead of microsecond). The control message is labelled SCM_TIMESTAMPNS instead of SCM_TIMESTAMP. A socket cannot mix SO_TIMESTAMP and SO_TIMESTAMPNS: the two modes are mutually exclusive. sock_recv_timestamp() became too big to be fully inlined, so I added a __sock_recv_timestamp() helper function. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> CC: linux-arch@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
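A minimal userspace sketch of how the new option would be consumed (the SO_TIMESTAMPNS/SCM_TIMESTAMPNS names come from the patch; buffer sizes and error handling here are illustrative only):

#include <sys/socket.h>
#include <sys/uio.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Sketch: request nanosecond timestamps, then read one datagram and print
 * the SCM_TIMESTAMPNS control message attached to it. */
static void print_rx_timestamp(int sock)
{
        int on = 1;
        char data[2048], ctrl[CMSG_SPACE(sizeof(struct timespec))];
        struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
        struct msghdr msg = {
                .msg_iov = &iov, .msg_iovlen = 1,
                .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cmsg;

        setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof(on));

        if (recvmsg(sock, &msg, 0) < 0)
                return;
        for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                if (cmsg->cmsg_level == SOL_SOCKET &&
                    cmsg->cmsg_type == SCM_TIMESTAMPNS) {
                        struct timespec ts;

                        memcpy(&ts, CMSG_DATA(cmsg), sizeof(ts));
                        printf("rx at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
                }
        }
}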
2007-04-25  [NET]: Replace CONFIG_NET_DEBUG with sysctl.  (Stephen Hemminger)
Convert network warning messages from a compile-time to a runtime choice. Removes the kernel config option and replaces it with a new /proc/sys/net/core/warnings sysctl. Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [NET]: Introduce SIOCGSTAMPNS ioctl to get timestamps with nanosec resolution  (Eric Dumazet)
Now that network timestamps use the ktime_t infrastructure, we can add a new SIOCGSTAMPNS ioctl() command to get timestamps as a 'struct timespec'. User programs thus get access to nanosecond resolution. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> CC: Stephen Hemminger <shemminger@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
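Usage from userspace is presumably analogous to the existing SIOCGSTAMP, just with a struct timespec instead of a struct timeval; a hedged sketch:

#include <sys/ioctl.h>
#include <linux/sockios.h>      /* SIOCGSTAMPNS */
#include <stdio.h>
#include <time.h>

/* Sketch: fetch the receive timestamp of the last packet read on the
 * socket, with nanosecond resolution. */
static void last_rx_stamp(int sock)
{
        struct timespec ts;

        if (ioctl(sock, SIOCGSTAMPNS, &ts) == 0)
                printf("last packet: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
}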
2007-04-25  [TCP]: Abstract out all write queue operations.  (David S. Miller)
This allows the write queue implementation to be changed, for example, to one which allows fast interval searching. Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [UDP]: Clean up UDP-Lite receive checksum  (Herbert Xu)
This patch eliminates some duplicate code for the verification of receive checksums between UDP-Lite and UDP. It does this by introducing __skb_checksum_complete_head, which is identical to __skb_checksum_complete apart from the fact that it takes a length parameter rather than always checksumming the first skb->len bytes. As a result UDP-Lite will be able to use hardware checksum offload for packets which do not use partial coverage checksums. It also means that UDP-Lite loopback no longer does unnecessary checksum verification. If any NICs start supporting UDP-Lite this would also start working automatically. This patch removes the assumption that msg_flags has MSG_TRUNC clear upon entry in recvmsg. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
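Reading the description above, the full-skb helper becomes the skb->len special case of the new length-limited one. A hedged sketch of that relationship (the signature below is assumed, not copied from the patch, and the _sketch name is ours):

#include <linux/skbuff.h>

/* Assumed signature: verify the checksum over the first 'len' bytes only. */
__sum16 __skb_checksum_complete_head(struct sk_buff *skb, int len);

/* Sketch: the existing full-skb check is then just the len == skb->len case. */
static inline __sum16 skb_checksum_complete_sketch(struct sk_buff *skb)
{
        return __skb_checksum_complete_head(skb, skb->len);
}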
2007-04-25  [NETLINK]: Limit NLMSG_GOODSIZE to 8K.  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [IPV6] ADDRCONF: Optimistic Duplicate Address Detection (RFC 4429) Support.  (Neil Horman)
Nominally an autoconfigured IPv6 address is added to an interface in the Tentative state (as per RFC 2462). Addresses in this state remain in this state while the Duplicate Address Detection process operates on them to determine their uniqueness on the network. During this period, these tentative addresses may not be used for communication, increasing the time before a node may be able to communicate on a network. Using Optimistic Duplicate Address Detection, autoconfigured addresses may be used immediately for communication on the network, as long as certain rules are followed to avoid conflicts with other nodes during the Duplicate Address Detection process. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [NET]: convert network timestamps to ktime_t  (Eric Dumazet)
We currently use a special structure (struct skb_timeval) and plain 'struct timeval' to store packet timestamps in sk_buffs and struct sock. This has some drawbacks: - Fixed resolution of micro second. - Waste of space on 64bit platforms where sizeof(struct timeval)=16 I suggest using ktime_t, which is a nice abstraction of high resolution time services, currently capable of nanosecond resolution. As sizeof(ktime_t) is 8 bytes, using ktime_t in 'struct sock' permits an 8 byte shrink of this structure on 64bit architectures. Some other structures also benefit from this size reduction (struct ipq in ipv4/ip_fragment.c, struct frag_queue in ipv6/reassembly.c, ...) Once this ktime infrastructure is adopted, we can more easily provide nanosecond resolution on top of it. (ioctl SIOCGSTAMPNS and/or SO_TIMESTAMPNS/SCM_TIMESTAMPNS) Note: this patch includes a bug correction in compat_sock_get_timestamp() where an "err = 0;" was missing (so this syscall returned -ENOENT instead of 0) Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> CC: Stephen Hemminger <shemminger@linux-foundation.org> CC: John find <linux.kernel@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
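For illustration, a hedged kernel-side sketch of taking a stamp as ktime_t and converting it back for the existing timeval/timespec interfaces (ktime_get_real(), ktime_to_timeval() and ktime_to_timespec() are the generic ktime helpers, not necessarily the exact calls this patch uses):

#include <linux/ktime.h>

static void stamp_sketch(void)
{
        ktime_t stamp = ktime_get_real();               /* 8 bytes, nanosecond resolution */
        struct timeval tv = ktime_to_timeval(stamp);    /* legacy SO_TIMESTAMP/SIOCGSTAMP path */
        struct timespec ts = ktime_to_timespec(stamp);  /* SO_TIMESTAMPNS/SIOCGSTAMPNS path */

        (void)tv;
        (void)ts;
}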
2007-04-25  [NET]: div64_64 consolidate (rev3)  (Stephen Hemminger)
Here is the current version of the 64 bit divide common code. Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [NET]: Convert xtime.tv_sec to get_seconds()  (James Morris)
Where appropriate, convert references to xtime.tv_sec to the get_seconds() helper function. Signed-off-by: James Morris <jmorris@namei.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [NET]: Keep sk_backlog near sk_lock  (Eric Dumazet)
sk_backlog is a critical field of struct sock. (known famous words) It is (ab)used in hot paths, in particular in release_sock(), tcp_recvmsg(), tcp_v4_rcv(), sk_receive_skb(). It really makes sense to place it next to sk_lock, because sk_backlog is only used after sk_lock is locked (so its memory cache line is already in the L1 cache). This should reduce cache misses and sk_lock acquisition time. (In theory, we could move only the head pointer near sk_lock and leave tail far away, because 'tail' is normally not so hot, but keep it simple :) ) Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP]: Add two new spurious RTO responses to FRTO  (Ilpo Järvinen)
New sysctl tcp_frto_response is added to select amongst these responses: - Rate halving based; reuses CA_CWR state (default) - Very conservative; used to be the only one available (=1) - Undo cwr; undoes ssthresh and cwnd reductions (=2) The response with rate halving requires a new parameter to tcp_enter_cwr because FRTO has already reduced ssthresh and doing a second reduction there has to be prevented. In addition, to keep things nice on 80 cols screen, a local variable was added. Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP]: Make snd_cwnd_clamp a u32.  (David S. Miller)
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP]: Keep copied_seq, rcv_wup and rcv_next together.  (Eric Dumazet)
I noticed in oprofile study a cache miss in tcp_rcv_established() to read copied_seq. ffffffff80400a80 <tcp_rcv_established>: /* tcp_rcv_established total: 4034293   2.0400 */  55493  0.0281 :ffffffff80400bc9:   mov    0x4c8(%r12),%eax copied_seq 543103  0.2746 :ffffffff80400bd1:   cmp    0x3e0(%r12),%eax   rcv_nxt     if (tp->copied_seq == tp->rcv_nxt &&         len - tcp_header_len <= tp->ucopy.len) { In this function, the cache line 0x4c0 -> 0x500 is used only for this reading 'copied_seq' field. rcv_wup and copied_seq should be next to rcv_nxt field, to lower number of active cache lines in hot paths. (tcp_rcv_established(), tcp_poll(), ...) As you suggested, I changed tcp_create_openreq_child() so that these fields are changed together, to avoid adding a new store buffer stall. Patch is 64bit friendly (no new hole because of alignment constraints) Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP]: Add RFC3742 Limited Slow-Start, controlled by variable sysctl_tcp_max_ssthresh.  (John Heffner)
Signed-off-by: John Heffner <jheffner@psc.edu> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP] FRTO: Entry is allowed only during (New)Reno like recovery  (Ilpo Järvinen)
This interpretation comes from RFC4138: "If the sender implements some loss recovery algorithm other than Reno or NewReno [FHG04], the F-RTO algorithm SHOULD NOT be entered when earlier fast recovery is underway." I think the RFC means to say (especially in the light of Appendix B) that ...recovery is underway (not just fast recovery) or was underway when it was interrupted by an earlier (F-)RTO that hasn't yet been resolved (snd_una has not advanced enough). Thus, my interpretation is that whenever TCP has ever retransmitted anything other than the head, the basic version cannot be used, because then the order assumptions on which FRTO is based do not hold. NewReno has only the head segment retransmitted at a time. Therefore, walk up to the first segment that has not been SACKed; if that segment is not retransmitted, nor anything before it, we know for sure that nothing after the non-SACKed segment should be either. This assumption is valid because TCPCB_EVER_RETRANS does not leave holes but each non-SACKed segment is rexmitted in-order. The check for retrans_out > 1 avoids a more expensive walk through the skb list, as we can know the result beforehand: F-RTO will not be allowed. A SACKed skb can turn into a non-SACKed one only in the extremely rare case of SACK reneging; in that case we might fail to detect retransmissions if there were any for segments other than the head. To get rid of that, the whole rexmit queue would have to be walked (always), or FRTO would have to be prevented when SACK reneging happens. Of course RTO should still trigger after reneging, which makes this issue even less likely to show up. And as long as the response is as conservative as it is now, nothing bad happens even then. Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25  [TCP] FRTO: Moved tcp_use_frto from tcp.h to tcp_input.c  (Ilpo Järvinen)
In addition, removed inline. Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-24  Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (Linus Torvalds)
* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: [BNX2]: Fix occasional NETDEV WATCHDOG on 5709. [IPV6]: Disallow RH0 by default. [XFRM]: beet: fix pseudo header length value [TCP]: Congestion control initialization.
2007-04-24  [IPV6]: Disallow RH0 by default.  (YOSHIFUJI Hideaki)
A security issue is emerging. Disallow Routing Header Type 0 by default as we have been doing for IPv4. Note: We allow RH2 by default because it is harmless. Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-24  Taskstats fix the structure members alignment issue  (Balbir Singh)
We broke the alignment of members of taskstats to the 8 byte boundary with the CSA patches. In the current kernel, the taskstats structure is not suitable for use by 32 bit applications in a 64 bit kernel. On x86_64:

Offsets of taskstats' members (64 bit kernel, 64 bit application) @taskstats'offsetof[@taskstats'indices] = ( 0, # version 4, # ac_exitcode 8, # ac_flag 9, # ac_nice 16, # cpu_count 24, # cpu_delay_total 32, # blkio_count 40, # blkio_delay_total 48, # swapin_count 56, # swapin_delay_total 64, # cpu_run_real_total 72, # cpu_run_virtual_total 80, # ac_comm 112, # ac_sched 113, # ac_pad 116, # ac_uid 120, # ac_gid 124, # ac_pid 128, # ac_ppid 132, # ac_btime 136, # ac_etime 144, # ac_utime 152, # ac_stime 160, # ac_minflt 168, # ac_majflt 176, # coremem 184, # virtmem 192, # hiwater_rss 200, # hiwater_vm 208, # read_char 216, # write_char 224, # read_syscalls 232, # write_syscalls 240, # read_bytes 248, # write_bytes 256, # cancelled_write_bytes );

Offsets of taskstats' members (64 bit kernel, 32 bit application) @taskstats'offsetof[@taskstats'indices] = ( 0, # version 4, # ac_exitcode 8, # ac_flag 9, # ac_nice 12, # cpu_count 20, # cpu_delay_total 28, # blkio_count 36, # blkio_delay_total 44, # swapin_count 52, # swapin_delay_total 60, # cpu_run_real_total 68, # cpu_run_virtual_total 76, # ac_comm 108, # ac_sched 109, # ac_pad 112, # ac_uid 116, # ac_gid 120, # ac_pid 124, # ac_ppid 128, # ac_btime 132, # ac_etime 140, # ac_utime 148, # ac_stime 156, # ac_minflt 164, # ac_majflt 172, # coremem 180, # virtmem 188, # hiwater_rss 196, # hiwater_vm 204, # read_char 212, # write_char 220, # read_syscalls 228, # write_syscalls 236, # read_bytes 244, # write_bytes 252, # cancelled_write_bytes );

One way to solve the problem without re-arranging structure members is to pack the structure. The patch adds an __attribute__((aligned(8))) to the taskstats structure members so that 32 bit applications using taskstats can work with a 64 bit kernel. Using __attribute__((packed)) would break the 64 bit alignment of members. The fix was tested on x86_64.

After the fix, we got:

Offsets of taskstats' members (64 bit kernel, 64 bit application) @taskstats'offsetof[@taskstats'indices] = ( 0, # version 4, # ac_exitcode 8, # ac_flag 9, # ac_nice 16, # cpu_count 24, # cpu_delay_total 32, # blkio_count 40, # blkio_delay_total 48, # swapin_count 56, # swapin_delay_total 64, # cpu_run_real_total 72, # cpu_run_virtual_total 80, # ac_comm 112, # ac_sched 113, # ac_pad 120, # ac_uid 124, # ac_gid 128, # ac_pid 132, # ac_ppid 136, # ac_btime 144, # ac_etime 152, # ac_utime 160, # ac_stime 168, # ac_minflt 176, # ac_majflt 184, # coremem 192, # virtmem 200, # hiwater_rss 208, # hiwater_vm 216, # read_char 224, # write_char 232, # read_syscalls 240, # write_syscalls 248, # read_bytes 256, # write_bytes 264, # cancelled_write_bytes );

Offsets of taskstats' members (64 bit kernel, 32 bit application) @taskstats'offsetof[@taskstats'indices] = ( 0, # version 4, # ac_exitcode 8, # ac_flag 9, # ac_nice 16, # cpu_count 24, # cpu_delay_total 32, # blkio_count 40, # blkio_delay_total 48, # swapin_count 56, # swapin_delay_total 64, # cpu_run_real_total 72, # cpu_run_virtual_total 80, # ac_comm 112, # ac_sched 113, # ac_pad 120, # ac_uid 124, # ac_gid 128, # ac_pid 132, # ac_ppid 136, # ac_btime 144, # ac_etime 152, # ac_utime 160, # ac_stime 168, # ac_minflt 176, # ac_majflt 184, # coremem 192, # virtmem 200, # hiwater_rss 208, # hiwater_vm 216, # read_char 224, # write_char 232, # read_syscalls 240, # write_syscalls 248, # read_bytes 256, # write_bytes 264, # cancelled_write_bytes );

Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com> Cc: Jay Lan <jlan@engr.sgi.com> Cc: Shailabh Nagar <nagar@watson.ibm.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
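A hedged sketch of the kind of annotation the fix describes, so that the first 64-bit member starts an 8-byte-aligned group for both 32-bit and 64-bit userspace (field names and types are inferred from the offsets above; the exact placement of the attribute is the patch's, not reproduced here):

#include <linux/types.h>

/* Sketch: force 8-byte alignment on the first u64 member so 32-bit and
 * 64-bit userspace see identical offsets from here on. */
struct taskstats_sketch {
        __u16   version;
        __u32   ac_exitcode;
        __u8    ac_flag;
        __u8    ac_nice;
        __u64   cpu_count __attribute__((aligned(8)));
        __u64   cpu_delay_total;
        /* ... remaining members unchanged ... */
};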
2007-04-20  Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus  (Linus Torvalds)
* 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus: [MIPS] Fix wrong checksum for split TCP packets on 64-bit MIPS [MIPS] Fix BUG(), BUG_ON() handling [MIPS] Retry {save,restore}_fp_context if failed in atomic context. [MIPS] Disallow CpU exception in kernel again. [MIPS] Add missing silicon revisions for BCM112x
2007-04-20  NFS: clean up the unstable write code  (Trond Myklebust)
Get rid of the inlined #ifdefs. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-04-20  [MIPS] Fix wrong checksum for split TCP packets on 64-bit MIPS  (Dave Johnson)
I've traced down an off-by-one TCP checksum calculation error under the following conditions: 1) The TCP code needs to split a full-sized packet due to a reduced MSS (typically due to the addition of TCP options mid-stream like SACK). _AND_ 2) The checksum of the 2nd fragment is larger than the checksum of the original packet. After subtraction this results in a checksum for the 1st fragment with bits 16..31 set to 1. (this is ok) _AND_ 3) The checksum of the 1st fragment's TCP header plus the previous 32bit checksum of the 1st fragment DOES NOT cause a 32bit overflow when added together. This results in a checksum of the TCP header plus TCP data that still has the upper 16 bits as 1's. _THEN_ 4) The TCP+data checksum is added to the checksum of the pseudo IP header with csum_tcpudp_nofold() incorrectly (the bug). The problem is that the checksum of the TCP+data is passed to csum_tcpudp_nofold() as a 32bit unsigned value, however the assembly code acts on it as if it is a 64bit unsigned value. This causes an incorrect 32->64bit extension if the sum has bit 31 set. The resulting checksum is off by one. This problem is data and TCP header dependent due to #2 and #3 above, so it doesn't occur on every TCP packet split. Signed-off-by: Dave Johnson <djohnson+linux-mips@sw.starentnetworks.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-04-20  [MIPS] Fix BUG(), BUG_ON() handling  (Atsushi Nemoto)
With commit 63dc68a8cf60cb110b147dab1704d990808b39e2, the kernel can not handle BUG() and BUG_ON() properly, since get_user() returns false for kernel code. Use __get_user() to skip the unnecessary access_ok(). This patch also encodes the BRK_BUG code in the TNE instruction. Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-04-20  [MIPS] Retry {save,restore}_fp_context if failed in atomic context.  (Atsushi Nemoto)
The save_fp_context()/restore_fp_context() functions might sleep while accessing the user stack and can therefore lose FPU ownership in the middle of the operation. If these functions fail due to the "in_atomic" test in do_page_fault, touch the sigcontext area in non-atomic context and retry the save/restore operation. This is a replacement for a (broken) fix which was titled "Allow CpU exception in kernel partially". Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2007-04-20  [MIPS] Disallow CpU exception in kernel again.  (Atsushi Nemoto)
The commit 4d40bff7110e9e1a97ff8c01bdd6350e9867cc10 ("Allow CpU exception in kernel partially") was broken. It was meant to fix a theoretical problem but broke the usual case. Revert it for now. Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>