
Is it possible to make use of the first page(s)? #53

Open

pepyakin opened this issue Feb 14, 2018 · 5 comments

Comments

@pepyakin (Member)

The first page(s) contain static data, bss, and the stack. When the allocator (e.g. wee_alloc) is asked to allocate for the first time, it mounts a new page, so the space from the end of the stack to the end of the initial page is wasted.

This space would be useful in blockchain applications, since there the user literally pays for each memory page.

To make use of this space, we need some way to find out the end (or rather the start, because the stack grows downward) of the stack region. I imagine something like a linker script would come in handy.

@alexcrichton (Contributor)

I believe this has since been solved with the --stack-first option in LLD, where the first pages are now used for stack space.
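For reference, one way to request that layout when building a Rust wasm module might look like this (a sketch: the `--stack-first` flag is from the comment above, but the exact rustc/cargo plumbing here is an assumption):

```shell
# Forward LLD's --stack-first flag through rustc so the stack is placed
# before static data in linear memory (invocation is an assumption, not
# taken from this thread).
RUSTFLAGS="-C link-arg=--stack-first" \
  cargo build --target wasm32-unknown-unknown --release
```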

@fitzgen fitzgen reopened this Jul 18, 2018
@fitzgen (Member)

fitzgen commented Jul 18, 2018

This is about using the first bits of heap memory after the stack and data, before allocating fresh pages.

Whether stack or data comes first, right now there is an unused portion of the first page (or one of the first n pages):

|           Page 0          |          Page 1      |      Page 2      |       |
+---------------------------+----------------------+------------------+--//---+
| stack | data | unused     | heap --->                                  \\   |
+---------------------------+----------------------+------------------+--//---+

I believe that @pepyakin is requesting that we figure out how to enable this:

|           Page 0          |          Page 1      |      Page 2      |       |
+---------------------------+----------------------+------------------+--//---+
| stack | data | heap --->                                               \\   |
+---------------------------+----------------------+------------------+--//---+

@fitzgen (Member)

fitzgen commented Jul 18, 2018

I think lld exposes a __heap_base variable that allocators can use to leverage the remainder of the stack/data pages. So I think there are two parts here:

  1. Expose lld's __heap_base through core::arch::wasm32

  2. Get allocators that target wasm32 to use that (presumably only if they are the global allocator)

@alexcrichton (Contributor)

Hm I actually thought LLD aligned data differently, along the lines of:

|           Page 0          |          Page 1      |      Page 2      |       |
+---------------------------+----------------------+------------------+--//---+
|              stack | data | heap --->                                  \\   |
+---------------------------+----------------------+------------------+--//---+

but it apparently does not, as this Rust file:

#![crate_type = "cdylib"]

static A: usize = 3;

#[no_mangle]
pub extern fn foo() -> usize { &A as *const usize as usize }

generates:

(module
  (type $t0 (func (result i32)))
  (func $foo (export "foo") (type $t0) (result i32)
    (i32.const 1048576))
  (table $T0 1 1 anyfunc)
  (memory $memory (export "memory") 17)
  (global $__heap_base (export "__heap_base") i32 (i32.const 1048580))
  (global $__data_end (export "__data_end") i32 (i32.const 1048580))
  (data (i32.const 1048576) "\03\00\00\00"))

@alexcrichton (Contributor)

Er, I hit submit a little too soon, but we ask for a 1 MB (1048576-byte) stack, so LLD is definitely allocating data at the start of a page rather than at the end.
