Age | Commit message | Author |
|
The type of 'byte_addr' needs to be 'unsigned int' for 512-byte
ECC support.
Signed-off-by: Vimal Singh <vimalsingh@ti.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Minimal support for the 4-bit ECC engine found on DM355, DM365,
DA830/OMAP-L137, and similar recent DaVinci-family chips.
This is limited to small-page flash for now; there are some page
layout issues for large page chips. Note that most boards using
this engine (like the DM355 EVM) include 2GiB large page chips.
Sanity tested on DM355 EVM after swapping the socketed NAND for
a small-page one.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Make the DaVinci NAND driver require platform_data with
board-specific configuration. We can't actually do any
kind of sane job of configuring it otherwise.
Also fix the comment about picking the "best" ECC mode.
We can't do that any more; it relied on knowing what kind
of CPU we're using (they don't all support 4-bit ECC), and
current policy is that drivers not have cpu_is_*() checks.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
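For illustration, the general shape of such board-specific configuration is a
platform_data struct supplied by the board file. A minimal sketch follows; the
pdata struct here is hypothetical (the real field layout lives in the DaVinci
platform header and is not reproduced), and the "davinci_nand" device name is
an assumption of this sketch.
#include <linux/platform_device.h>

/* Hypothetical stand-in for the driver's real platform_data struct;
 * the only point illustrated is that the board file supplies the
 * configuration, not the driver. */
struct example_nand_pdata {
    int ecc_mode;   /* e.g. which ECC scheme/strength to use */
};

static struct example_nand_pdata evm_nand_pdata = {
    .ecc_mode = 4,
};

static struct platform_device evm_nand_device = {
    .name = "davinci_nand",   /* driver name, assumed here */
    .id   = 0,
    .dev  = {
        .platform_data = &evm_nand_pdata,
    },
};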
|
|
Resolve issue noted by Sneha: when computing oobavail from
the list of free areas in the OOB, don't assume there will
always be an unused slot at the end. With ECC_HW_SYNDROME
and 4KiB page chips, it's fairly likely there *won't* be one.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: "Narnakaje, Snehaprabha" <nsnehaprabha@ti.com>"
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
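A minimal sketch of the corrected style of computation, assuming the usual
'struct nand_ecclayout' free-area list; the point is to bound the loop by the
array size instead of relying on a trailing zero-length slot. The helper name
is illustrative, not the in-tree code.
#include <linux/kernel.h>
#include <linux/mtd/mtd.h>

/* Count usable OOB bytes without assuming a zero-length terminator
 * entry exists; with ECC_HW_SYNDROME on 4KiB-page chips every slot
 * may be in use. */
static u32 example_count_oobavail(const struct nand_ecclayout *layout)
{
    u32 avail = 0;
    int i;

    for (i = 0; i < ARRAY_SIZE(layout->oobfree); i++) {
        if (!layout->oobfree[i].length)
            break;    /* terminator, if present */
        avail += layout->oobfree[i].length;
    }
    return avail;
}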
|
|
With CONFIG_HOTPLUG=n, the following error occurred during link:
local symbol 0: discarded in section `.devexit.text' from
drivers/built-in.o
It was caused by an improper section reference: the .remove
function should be wrapped with __devexit_p().
Signed-off-by: Thomas Chou <thomas@wytron.com.tw>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
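For reference, a minimal sketch of the pattern with a hypothetical "foo"
driver: wrapping the remove callback in __devexit_p() lets the pointer become
NULL when the .devexit.text section is discarded, so no reference to the
discarded symbol remains.
#include <linux/init.h>
#include <linux/platform_device.h>

/* Hypothetical driver, for illustration only. */
static int __devexit foo_remove(struct platform_device *pdev)
{
    return 0;
}

static struct platform_driver foo_driver = {
    .driver = {
        .name = "foo",
    },
    /* Without __devexit_p() this would reference a function that
     * lives in the discarded .devexit.text section. */
    .remove = __devexit_p(foo_remove),
};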
|
|
[Artem: re-worked the patch: made it release resources when the
module is unloaded, made it do module referencing, made it really
independent of UBI, tested it with the UBI test-suite which can
be found in ubi-2.6.git/tests/ubi-tests, renamed most of the
functions/variables to get rid of the "ubi" word and make names
consistent.]
Signed-off-by: Dmitry Pervushin <dpervushin@embeddedalley.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Remove built-in gluebi support. This is a preparation for
supporting gluebi as a standalone module.
Signed-off-by: Dmitry Pervushin <dpervushin@embeddedalley.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
UBI volume notifications are intended to provide an API for getting
clients notified about volume creation, deletion, renaming and
re-sizing. A client can subscribe to these notifications using
'ubi_volume_register()' and cancel the subscription using
'ubi_volume_unregister()'. When UBI volumes change, a blocking
notifier is called. Clients can also request "added" events for all
volumes that existed before the client subscribed to the
notifications.
If we use notifications instead of calling functions like
'ubi_gluebi_xxx()', we can make the MTD emulation layer more
flexible: build it as a separate module and load/unload it on demand.
[Artem: many cleanups, rework locking, add "updated" event, provide
device/volume info in notifiers]
Signed-off-by: Dmitry Pervushin <dpervushin@embeddedalley.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
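A sketch of the client side, using the standard notifier_block machinery; the
subscription helper names are those quoted in the commit text and their exact
prototypes are an assumption here, so they appear only in comments.
#include <linux/notifier.h>

/* Client callback invoked through the blocking notifier when a volume
 * is created, removed, renamed, re-sized or updated; 'data' carries
 * the device/volume information UBI provides. */
static int example_vol_notify(struct notifier_block *nb,
                              unsigned long event, void *data)
{
    /* React to the event here. */
    return NOTIFY_OK;
}

static struct notifier_block example_vol_nb = {
    .notifier_call = example_vol_notify,
};

/* Subscribe with 'ubi_volume_register()' and later unsubscribe with
 * 'ubi_volume_unregister()', as named in the commit message; a client
 * can also ask for "added" events for pre-existing volumes. */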
|
|
This patch improves UBI error handling. At the moment UBI switches
to R/O mode when the WL worker fails to read the source PEB.
This means that the upper layers (e.g., UBIFS) have no
chance to unmap the erroneous PEB and fix the error.
This patch changes this behaviour and makes UBI put such PEBs
into a separate RB-tree, thus preventing the WL worker from
hitting the same read errors again and again.
But there is a limit of 10% on the maximum number of such PEBs;
if there are too many of them, UBI switches to R/O mode.
Additionally, this patch teaches UBI not to panic and
switch to R/O mode if, after a PEB has been copied, the
target LEB cannot be read back. Instead, UBI now cancels
the operation and schedules the target PEB for torturing.
The error paths have been tested by injecting errors
into 'ubi_eba_copy_leb()'.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch fixes the error path in the WL worker - in some cases
UBI oopses when 'goto out_error' happens and e1 or e2 are NULL.
This patch also cleans up the error paths a little. And I have
tested nearly all error paths in the WL worker.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch is a clean-up and a preparation for the following
patches. It introduces constants for the return values of the
'ubi_eba_copy_leb()' function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
This patch allows commandline partition processing to
work with the s3c2410 NAND platform driver.
Signed-off-by: Andy Green <andy@warmcat.com>
Signed-off-by: Nelson Castillo <arhuaco@freaks-unidos.net>
[ben-linux@fluff.org: Change andy@openmoko.com to andy@warmcat.com]
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
|
|
Fix NAND CFG debug order.
Signed-off-by: Andy Green <andy@warmcat.com>
Signed-off-by: Nelson Castillo <arhuaco@freaks-unidos.net>
[ben-linux@fluff.org: Change andy@openmoko.com to andy@warmcat.com, subject cleanup]
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
|
|
~ Avoid a compiler warning without generating extra code.
(I don't even get the warning without the uninitialized_var macro.)
Signed-off-by: Nelson Castillo <arhuaco@freaks-unidos.net>
[ben-linux@fluff.org: subject cleanup]
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
|
|
This makes us take note of the ECC mode chosen per chip rather
than the one set globally.
Signed-off-by: Andy Green <andy@warmcat.com>
Signed-off-by: Nelson Castillo <arhuaco@freaks-unidos.net>
[ben-linux@fluff.org: andy@openmoko.com => andy@warmcat.com, rewrite subject]
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
|
|
Move to using kerneldoc-style commenting in the driver.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
|
|
Commit 57fee4a58fe802272742caae248872c392a60670 added a
method to specify platform device compatibility by using
an id-table instead of registering multiple drivers.
Move the S3C24XX NAND driver to using this ID table.
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
CC: Eric Miao <eric.miao@marvell.com>
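A sketch of the id-table mechanism this refers to; the device names listed
are illustrative rather than the driver's exact set, and probe/remove are
omitted.
#include <linux/module.h>
#include <linux/platform_device.h>

/* One driver matches several compatible platform device names; the
 * probe function can retrieve the matched entry's driver_data via
 * platform_get_device_id(). */
static const struct platform_device_id example_nand_ids[] = {
    { "s3c2410-nand", 0 },
    { "s3c2440-nand", 1 },
    { },
};
MODULE_DEVICE_TABLE(platform, example_nand_ids);

static struct platform_driver example_nand_driver = {
    .id_table = example_nand_ids,
    .driver   = {
        .name = "s3c24xx-nand",
    },
};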
|
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Remove all references to MTD ioctls from fs/compat_ioctl.c and let
them all be handled by mtd_compat_ioctl().
Signed-off-by: Kevin Cernekee <kpc.mtd@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Signed-off-by: Kevin Cernekee <kpc.mtd@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
1) Move the MEMREADOOB/MEMWRITEOOB compat_ioctl wrappers from
fs/compat_ioctl.c into mtdchar.c. The original request was here:
http://lkml.org/lkml/2009/4/1/295
2) Add missing COMPATIBLE_IOCTL lines, so that mtd-utils does not
error out when running in 64/32 compatibility mode.
LKML-Reference: <200904011650.22928.arnd@arndb.de>
Signed-off-by: Kevin Cernekee <kpc.mtd@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
New MEMERASE/MEMREADOOB/MEMWRITEOOB ioctls are needed in order to support
64-bit offsets into large NAND flash devices.
Signed-off-by: Kevin Cernekee <kpc.mtd@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
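A userspace sketch, assuming the 64-bit variants exported through
<mtd/mtd-user.h> (MEMERASE64 with struct erase_info_user64 is shown; the
analogous OOB calls take a 64-bit start offset too).
#include <stdint.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

/* Erase one block at an offset that may exceed 4GiB; 'fd' is an open
 * /dev/mtdX character device. */
static int erase_block64(int fd, uint64_t start, uint64_t length)
{
    struct erase_info_user64 ei = {
        .start  = start,   /* full 64-bit offset */
        .length = length,
    };

    return ioctl(fd, MEMERASE64, &ei);
}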
|
|
As pointed out by Kay Sievers, the name size limit is gone
from the driver-core, and BUS_ID_SIZE is obsolescent.
Rather than just papering over the problem by replacing the mtdname
array size with an arbitrary '20 + 2', fix the problem properly and
handle arbitrary name sizes.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
This patch adds MTD concatenation support to integrator-flash.c for
platforms with more than one block of flash memory (e.g. RealView
PB11MPCore). The implementation is based on the sa1100-flash.c one.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
We'll fix it up again, but for now I don't think anyone really cares.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
The following patch fixes:
- re-initialization of host->col_addr, which is used as a byte index
between successive READID flash commands.
- a compile error when CONFIG_PM is enabled
- passing on the error code from clk_get()
- returning -ENOMEM in case of a failed ioremap()
- passing on the return value of platform_driver_probe() directly
- removal of an excessive printk
- command line partition table parsing with the mxc_nand name.
The cmd_line parsing is done via the <mtd-id> name, which by default
differs from mxc_nand and looks like "NAND 256MiB 1,8V 8-bit".
Signed-off-by: Vladimir Barinov <vbarinov@embeddedalley.com>
Signed-off-by: Lothar Wassmann <LW@KARO-electronics.de>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
This patch is to sync the core linux-omap PM code with mainline. This
code has evolved and been used for a while in the linux-omap tree, but
the attempt here is to finally get it into mainline.
Following this will be a series of patches from the 'PM branch' of the
linux-omap tree to add full PM hardware support from the linux-omap
tree.
Much of this PM core code was written by Jouni Hogander with
significant contributions from Paul Walmsley as well as many others
from Nokia, Texas Instruments and linux-omap community.
Signed-off-by: Jouni Hogander <jouni.hogander@nokia.com>
Cc: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
|
|
This breaks the dilnetpc map driver, but it could be fixed not to use
that option. We want to simplify the partition handling, and this is a
step towards that.
Remove superfluous 'index' field from private struct mtd_part too, while
we're at it.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Now the MTD core will do this for us, we don't need to hook it up from
the board drivers.
Shame we can't do shutdown from the class too...
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
This is intended to suspend/resume the _chip_, while we leave board
drivers to handle their own suspend/resume for the controller.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
|
This patch fixes a minor problem where we may fail to wake
up the UBI background thread. This is not fatal at all,
it may just result in slightly worse performance for a short
period of time, simply because the thread will be woken up
when real I/O on the UBI device starts.
Anyway, the issue is a race condition between
'ubi_attach_mtd_dev()' and 'ubi_thread()'. If we do not
serialize them, the 'wake_up_process()' call may be done
before 'ubi_thread()' went to sleep, but after it checked
'ubi->thread_enabled'.
This issue was spotted by Shin Hong <hongshin@gmail.com>.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
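For context, the canonical way to avoid this kind of lost wakeup (a generic
sketch, not the actual UBI fix): publish the task state before re-checking
the condition, so a concurrent wake_up_process() issued after the condition
was set cannot be missed. 'should_run()' and 'do_work()' below are
hypothetical placeholders.
#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical placeholders for "is there work?" and the work itself. */
static bool should_run(void *data) { return false; }
static void do_work(void *data) { }

static int example_thread(void *data)
{
    while (!kthread_should_stop()) {
        set_current_state(TASK_INTERRUPTIBLE);
        if (should_run(data)) {
            __set_current_state(TASK_RUNNING);
            do_work(data);
            continue;
        }
        /* A wake_up_process() racing with us after the state was
         * set above simply makes schedule() return immediately. */
        schedule();
    }
    return 0;
}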
|
|
Until now we have had a 1:1 mapping between storage device physical
block size and the logical block size used when addressing the device.
With SATA 4KB drives coming out, that will no longer be the case. The
sector size will be 4KB but the logical block size will remain
512 bytes. Hence we need to distinguish between the physical block
size and the logical one.
This patch renames hardsect_size to logical_block_size.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
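A minimal sketch of the renamed helper from a block driver's point of view;
the queue pointer is whatever the driver allocated, and the function name is
illustrative.
#include <linux/blkdev.h>

/* After the rename, the addressing unit is set with
 * blk_queue_logical_block_size(); a 4KB-sector drive would still
 * report 512 here, with the physical sector size tracked separately. */
static void example_set_block_size(struct request_queue *q)
{
    blk_queue_logical_block_size(q, 512);
}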
|
|
Conflicts:
drivers/block/hd.c
drivers/block/mg_disk.c
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
Commit 5b7f3a50 (fix dataflash 64-bit divisions) unfortunately
introduced a typo. Erase addr and len were swapped in the pageaddr
calculation, causing the wrong sectors to get erased.
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Acked-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The @vol->upd_marker should be protected by the @ubi->device_mutex,
otherwise 'paranoid_check_volume()' complains sometimes because
vol->upd_marker is 1 while vtbl_rec->upd_marker is 0.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
If a volume paranoid check fails, do not return an error
code to the caller, but just print error messages and go
forward. The primary reason for this is that it is difficult
to recover and cancel the operation at that stage.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
I am experiencing an error in 'paranoid_check_volume()'. Add
dump_stack() there to make it easier to identify the reasons
for the error.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
When paranoid checks are enabled, the 'io_paral' test from the
'mtd-utils' package fails. The symptoms are:
UBI error: paranoid_check_all_ff: flash region at PEB 3973:512, length 15872 does not contain all 0xFF bytes
UBI error: paranoid_check_all_ff: paranoid check failed for PEB 3973
UBI: hex dump of the 512-16384 region
It turned out to be a bug in the checking function. Suppose there
are 2 tasks - A and B. Task A is the wear-levelling worker
('wear_leveling_worker()'). It is reading the VID header to find
out which LEB this PEB belongs to. Say, task A is reading the header
of PEB X. Suppose PEB X is unmapped and has no VID header.
Task B is trying to write to PEB X.
Task A: in 'ubi_io_read_vid_hdr()': reads the VID header from PEB X.
The read data contain all 0xFF bytes.
Task B: writes VID header and some data to PEB X
Task A: assumes PEB X is empty, calls 'paranoid_check_all_ff()', which
fails.
The solution for this problem is to make 'paranoid_check_all_ff()'
re-read the VID header, re-check it, and only if it is not there,
check the rest. This is now implemented by the
'paranoid_check_empty()' function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The @ubi->dbg_peb_buf is needed only when paranoid checks are
enabled, not when debugging in general is enabled.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Various minor improvements to the debugging messages which
I found useful while hunting problems.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The mutex essentially protects the entire UBI device, so the
old @volumes_mutex name is a little misleading.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
The @mult_mutex does not serve any purpose. We already have
@volumes_mutex and it is enough. The @volumes_mutex locking is
pushed down to 'ubi_rename_volumes()', because we first want to
open all volumes in exclusive mode and then lock the mutex, just
like all the other ioctls (remove, re-size, etc.) do.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
|
|
Till now the block layer allowed two separate modes of request execution.
A request is always acquired from the request queue via
elv_next_request(). After that, drivers are free to either dequeue it
or process it without dequeueing. Dequeueing allows elv_next_request()
to return the next request so that multiple requests can be in flight.
Executing requests without dequeueing has its merits, mostly in
allowing drivers for simpler devices which can't do sg to deal with
segments only, without considering request boundaries. However, the
benefit this brings is dubious and declining, while the cost of the API
ambiguity is increasing. Segment-based drivers are usually for very
old or limited devices, and as converting to the dequeueing model isn't
difficult, it doesn't justify the API overhead it puts on the block
layer and its more modern users.
Previous patches converted all block low level drivers to dequeueing
model. This patch completes the API transition by...
* renaming elv_next_request() to blk_peek_request()
* renaming blkdev_dequeue_request() to blk_start_request()
* adding blk_fetch_request() which is a combination of peek and start
* disallowing completion of queued (not started) requests
* applying new API to all LLDs
Renamings are for consistency and to break out-of-tree code so that
it's apparent that out-of-tree drivers need updating.
[ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
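A sketch of the resulting issue loop under the new API (driver-specific
processing elided): blk_fetch_request() combines the peek and the start/
dequeue, and returns NULL when the queue is empty.
#include <linux/blkdev.h>

/* The request_fn runs with the queue lock held, hence the __-prefixed
 * completion helper. */
static void example_request_fn(struct request_queue *q)
{
    struct request *rq;

    while ((rq = blk_fetch_request(q)) != NULL) {
        /* The request is now started (in flight and dequeued);
         * process it, then complete it. */
        __blk_end_request_all(rq, 0);
    }
}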
|
|
mtd_blkdevs processes requests one-by-one synchronously from a kthread
and can easily be converted to the dequeueing model. Convert it.
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
With the previous changes, the following are now guaranteed for all
requests in any valid state:
* blk_rq_sectors() == blk_rq_bytes() >> 9
* blk_rq_cur_sectors() == blk_rq_cur_bytes() >> 9
Clean up accessor usages. Notable changes are:
* nbd, i2o_block: end_all used instead of an explicit byte count
* scsi_lib: unnecessary conditional on request type removed
[ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
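The guaranteed identities quoted above, restated as a small sketch (the
WARN_ON lines express the two equalities; the function name is illustrative).
#include <linux/blkdev.h>

static void example_check_accessors(struct request *rq)
{
    /* The sector accessors are defined in terms of the byte counts,
     * so neither warning fires for a request in a valid state. */
    WARN_ON(blk_rq_sectors(rq) != blk_rq_bytes(rq) >> 9);
    WARN_ON(blk_rq_cur_sectors(rq) != blk_rq_cur_bytes(rq) >> 9);
}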
|
|
With recent cleanups, there is no place where a low-level driver
directly manipulates request fields. This means that the 'hard'
request fields always equal the !hard fields. Convert all
rq->sectors, nr_sectors and current_nr_sectors references to
accessors.
While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.
[ Impact: use pos and nr_sectors accessors ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Mike Miller <mike.miller@hp.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Dario Ballabio <ballabio_dario@emc.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: unsik Kim <donari75@gmail.com>
Cc: Laurent Vivier <Laurent@lvivier.info>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
* git://git.infradead.org/mtd-2.6:
mtd: fix timeout in M25P80 driver
mtd: Bug in m25p80.c during whole-chip erase
mtd: expose subpage size via sysfs
mtd: mtd in mtd_release is unused without CONFIG_MTD_CHAR
|