
Re: [PATCH v3 2.6.39-rc1-tip 4/26] 4: uprobes: Background page replacement.


On Fri, 2011-04-01 at 20:03 +0530, Srikar Dronamraju wrote:

> +static int write_opcode(struct task_struct *tsk, struct uprobe * uprobe,
> +			unsigned long vaddr, uprobe_opcode_t opcode)
> +{
> +	struct page *old_page, *new_page;
> +	void *vaddr_old, *vaddr_new;
> +	struct vm_area_struct *vma;
> +	spinlock_t *ptl;
> +	pte_t *orig_pte;
> +	unsigned long addr;
> +	int ret;
> +
> +	/* Read the page with vaddr into memory */
> +	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 1, 1, &old_page, &vma);
> +	if (ret <= 0)
> +		return -EINVAL;

Why not return the actual gup() error?
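
Something like the below (an untested sketch; the -EFAULT fallback for a
zero return is just my guess) would pass gup()'s own error up instead of
collapsing everything into -EINVAL:

	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 1, 1, &old_page, &vma);
	if (ret <= 0)
		return ret < 0 ? ret : -EFAULT;	/* propagate gup()'s error */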

> +	ret = -EINVAL;
> +
> +	/*
> +	 * We are interested in text pages only. Our pages of interest
> +	 * should be mapped for read and execute only. We desist from
> +	 * adding probes in write mapped pages since the breakpoints
> +	 * might end up in the file copy.
> +	 */
> +	if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) !=
> +						(VM_READ|VM_EXEC))
> +		goto put_out;

Note how you return -EINVAL here when we're attempting to poke at the
wrong kind of mapping.

> +	/* Allocate a page */
> +	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
> +	if (!new_page) {
> +		ret = -ENOMEM;
> +		goto put_out;
> +	}
> +
> +	/*
> +	 * lock page will serialize against do_wp_page()'s
> +	 * PageAnon() handling
> +	 */
> +	lock_page(old_page);
> +	/* copy the page now that we've got it stable */
> +	vaddr_old = kmap_atomic(old_page, KM_USER0);
> +	vaddr_new = kmap_atomic(new_page, KM_USER1);
> +
> +	memcpy(vaddr_new, vaddr_old, PAGE_SIZE);
> +	/* poke the new insn in, ASSUMES we don't cross page boundary */

Why not test this assertion with a VM_BUG_ON() or something?
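
E.g. something like this (untested) right before the memcpy():

	/* the poked opcode must not straddle a page boundary */
	VM_BUG_ON((vaddr & ~PAGE_MASK) + uprobe_opcode_sz > PAGE_SIZE);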

> +	addr = vaddr;
> +	vaddr &= ~PAGE_MASK;
> +	memcpy(vaddr_new + vaddr, &opcode, uprobe_opcode_sz);
> +
> +	kunmap_atomic(vaddr_new, KM_USER1);
> +	kunmap_atomic(vaddr_old, KM_USER0);

The use of KM_foo is obsolete and unneeded.
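
With the stack-based kmap_atomic() this simply becomes (sketch):

	vaddr_old = kmap_atomic(old_page);
	vaddr_new = kmap_atomic(new_page);
	...
	kunmap_atomic(vaddr_new);
	kunmap_atomic(vaddr_old);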

> +	orig_pte = page_check_address(old_page, tsk->mm, addr, &ptl, 0);
> +	if (!orig_pte)
> +		goto unlock_out;
> +	pte_unmap_unlock(orig_pte, ptl);
> +
> +	lock_page(new_page);
> +	ret = anon_vma_prepare(vma);
> +	if (!ret)
> +		ret = replace_page(vma, old_page, new_page, *orig_pte);
> +
> +	unlock_page(new_page);
> +	if (ret != 0)
> +		page_cache_release(new_page);
> +unlock_out:
> +	unlock_page(old_page);
> +
> +put_out:
> +	put_page(old_page); /* we did a get_page in the beginning */
> +	return ret;
> +}
> +
> +/**
> + * read_opcode - read the opcode at a given virtual address.
> + * @tsk: the probed task.
> + * @vaddr: the virtual address to read the opcode.
> + * @opcode: location to store the read opcode.
> + *
> + * Called with tsk->mm->mmap_sem held (for read) and with a reference to
> + * tsk->mm.
> + *
> + * For task @tsk, read the opcode at @vaddr and store it in @opcode.
> + * Return 0 (success) or a negative errno.
> + */
> +int __weak read_opcode(struct task_struct *tsk, unsigned long vaddr,
> +						uprobe_opcode_t *opcode)
> +{
> +	struct vm_area_struct *vma;
> +	struct page *page;
> +	void *vaddr_new;
> +	int ret;
> +
> +	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 0, 0, &page, &vma);
> +	if (ret <= 0)
> +		return -EFAULT;

Again, why not return the gup() error proper?

> +	ret = -EFAULT;
> +
> +	/*
> +	 * We are interested in text pages only. Our pages of interest
> +	 * should be mapped for read and execute only. We desist from
> +	 * adding probes in write mapped pages since the breakpoints
> +	 * might end up in the file copy.
> +	 */
> +	if ((vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)) !=
> +						(VM_READ|VM_EXEC))
> +		goto put_out;

But now you return -EFAULT if we peek at the wrong kind of mapping,
which is inconsistent with the -EINVAL of write_opcode().
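
An untested sketch of what I mean, making read_opcode() match
write_opcode() and also propagating gup()'s error as above:

	ret = get_user_pages(tsk, tsk->mm, vaddr, 1, 0, 0, &page, &vma);
	if (ret <= 0)
		return ret < 0 ? ret : -EFAULT;	/* gup()'s own error */

	ret = -EINVAL;	/* same errno as write_opcode() for a bad mapping */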

> +	lock_page(page);
> +	vaddr_new = kmap_atomic(page, KM_USER0);
> +	vaddr &= ~PAGE_MASK;
> +	memcpy(opcode, vaddr_new + vaddr, uprobe_opcode_sz);
> +	kunmap_atomic(vaddr_new, KM_USER0);

Again, lose the KM_foo.

> +	unlock_page(page);
> +	ret =  0;
> +
> +put_out:
> +	put_page(page); /* we did a get_page in the beginning */
> +	return ret;
> +}

