author		Jérôme Glisse <jglisse@redhat.com>	2014-09-11 11:22:12 -0400
committer	Jérôme Glisse <jglisse@redhat.com>	2016-04-07 13:23:24 -0400
commit		163236706880bc24412081a7794f2110292f4f58
tree		50dfb9cf1dc109e6cca9eeecf9dcf552237b4c5e	/mm/mremap.c
parent		6abff77f76d50c374057228baa0e01354d9bf1a1
mmu_notifier: keep track of active invalidation ranges v5
The invalidate_range_start() and invalidate_range_end() callbacks can be
considered as forming an "atomic" section from the CPU page table
update point of view. Between these two functions the CPU page
table content is unreliable for the address range being
invalidated.
This patch uses a structure, defined at every place doing range
invalidation, to describe the range. The structure is added to a list
for the duration of the update, i.e. added by invalidate_range_start()
and removed by invalidate_range_end().
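For reference, a minimal sketch of what such a range structure could
look like, inferred from the fields used in the mm/mremap.c hunk below
(start, end, event) and the list linkage described above; the real
definition lives in include/linux/mmu_notifier.h and may carry extra
fields:

	struct mmu_notifier_range {
		struct list_head list;	/* entry on the mm's active range list */
		unsigned long start;	/* first address covered by the invalidation */
		unsigned long end;	/* end of the invalidated range (exclusive) */
		enum mmu_event event;	/* reason for the invalidation, e.g. MMU_MIGRATE */
	};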
Helpers allow querying whether a range is valid (i.e. not under active
invalidation) and waiting for it if necessary.
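A driver mirroring CPU page tables could use these helpers roughly as
follows; mmu_notifier_range_wait_active() is named in the changelog
below, while mmu_notifier_range_inactive() and the exact signatures
are an assumption for illustration:

	/* Sketch only: before trusting the CPU page table for [start, end),
	 * make sure no invalidation of that range is in flight, or wait for
	 * it to complete.
	 */
	if (!mmu_notifier_range_inactive(mm, start, end))
		mmu_notifier_range_wait_active(mm, start, end);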
For proper synchronization, users must block any new range
invalidation from inside their invalidate_range_start() callback.
Otherwise there is no guarantee that a new range invalidation will
not be added after the call to the helper function that queries for
existing ranges.
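Concretely, a user could take a driver lock in its
->invalidate_range_start() callback and hold that same lock around the
query shown above, so that no new range can slip in between the check
and the use of the CPU page table. The following is only an
illustrative sketch; the mirror structure, lock and callback are
hypothetical and not part of this patch:

	static void example_invalidate_range_start(struct mmu_notifier *mn,
						   struct mm_struct *mm,
						   const struct mmu_notifier_range *range)
	{
		struct example_mirror *m = container_of(mn, struct example_mirror, mn);

		/* Serialize against the device fault path so it cannot build
		 * new mirror entries while this range is being invalidated.
		 */
		mutex_lock(&m->lock);
		example_mirror_unmap(m, range->start, range->end);
		mutex_unlock(&m->lock);
	}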
Changed since v1:
  - Fix a possible deadlock in mmu_notifier_range_wait_active()
Changed since v2:
  - Add the range to invalid range list before calling ->range_start().
  - Del the range from invalid range list after calling ->range_end().
  - Remove useless list initialization.
Changed since v3:
  - Improved commit message.
  - Added comments to explain how the helper functions are supposed to be used.
  - English syntax fixes.
Changed since v4:
  - Syntax fixes.
  - Rename from range_*_valid to range_*active|inactive.
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Diffstat (limited to 'mm/mremap.c')
 mm/mremap.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index 9544022ca67a..2d2bc4767f98 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -165,18 +165,17 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		bool need_rmap_locks)
 {
 	unsigned long extent, next, old_end;
+	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
 	bool need_flush = false;
-	unsigned long mmun_start;	/* For mmu_notifiers */
-	unsigned long mmun_end;		/* For mmu_notifiers */
 
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
-	mmun_start = old_addr;
-	mmun_end   = old_end;
-	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
-					    mmun_end, MMU_MIGRATE);
+	range.start = old_addr;
+	range.end = old_end;
+	range.event = MMU_MIGRATE;
+	mmu_notifier_invalidate_range_start(vma->vm_mm, &range);
 
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
@@ -228,8 +227,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	if (likely(need_flush))
 		flush_tlb_range(vma, old_end-len, old_addr);
 
-	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
-					  mmun_end, MMU_MIGRATE);
+	mmu_notifier_invalidate_range_end(vma->vm_mm, &range);
 
 	return len + old_addr - old_end;	/* how much done */
 }
