This repository has been archived by the owner on Nov 20, 2018. It is now read-only.

Error on finalization - FineUploader S3 Concurrent Chunk Uploads #1519

Closed
jfpaulin opened this issue Jan 21, 2016 · 25 comments

Comments

@jfpaulin

We are using FineUploader to upload big files directly to our Amazon S3 buckets. Here is my configuration:

fineUploaderReference = $("#fine-uploader").fineUploaderS3({
    debug: true,
    uploaderType: 'basic',
    button: $('#attach-button'),
    maxConnections: 5,
    validation: { sizeLimit: 31457280000 },
    retry: { enableAuto: true, autoAttemptDelay: 5, maxAutoAttempts: 50 },
    chunking: { enabled: true, partSize: 5 * 1024 * 1024, concurrent: { enabled: true } }, // 5 MB parts
    resume: { enabled: true, recordsExpireIn: 1 },
    messages: {
        'onLeave': 'Files are still uploading, are you sure you want to leave this page and cancel the upload?',
        'sizeError': 'Free accounts are limited to 29.297 GB, please log in for more options',
        'emptyError': 'File is empty'
    },
    signature: { endpoint: '******' },
    iframeSupport: {
        localBlankPagePath: '/empty.html'
    },
    request: {
        endpoint: 'https://*****.s3.amazonaws.com',
        accessKey: '*******'
    }
});

Sometimes we get an error when trying to finalize the file on S3. Here is the error we receive:

[Fine Uploader 5.5.0] Submitting S3 complete multipart upload request for 0 s3.jque...s?r=3.6 (line 16)
[Fine Uploader 5.5.0] Sending POST request for 0 s3.jque...s?r=3.6 (line 16)
POST https://**********.s3.amazonaws.com/14533..._8j14EDjuVHXXfg6wNHuIai84IBn58Ig_GUhZg5hOLT3tvns    200 OK        78ms    s3.jque...s?r=3.6 (line 17)
[Fine Uploader 5.5.0] Complete response status 200, body = 
InvalidPartOrderThe list of parts was not in ascending order. Parts must be ordered by part number.F0UOQTefZmRcg0L_P9tEe9SLxza7mInpsiXPPiR58ARb6Zh4UOI_wkh_t_Cc8eL.ZTe1NGxQiNeRFNkD_8j14EDjuVHXXfg6wNHuIai84IBn58Ig_GUhZg5hOLT3tvns02F41891E40C60F2+OSli4SW3A+ZKlJTk3EWSEcQHu4r6lGcbCfcLWKGcjOoaWC6h5hUTHCoLHEQap1VUPqUlXzVHFg= s3.jque...s?r=3.6 (line 16)
[Fine Uploader 5.5.0] Missing bucket and/or key in response to Complete Multipart Upload request for 0.

This seems to happen only if one of the chunks gets an error during the upload, even though it seems to resume/retry correctly. And I was able to reproduce the problem only when trying to upload big files (i.e. 5 GB+).

Everything works fine if I turn off concurrent chunk uploads. Even if some chunks get an error during the upload, the resume/retry works correctly and the file can be finalized at the end.
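As a stopgap, the workaround described here is just the chunking option from the config above with concurrent uploads switched off:

```javascript
// Workaround sketch: the same chunking options as in the original
// config, but with concurrent uploads disabled so parts are sent
// strictly one at a time, in order.
var chunkingOptions = {
    enabled: true,
    partSize: 5 * 1024 * 1024, // 5 MB parts
    concurrent: { enabled: false }
};
```

Chunks are then uploaded sequentially, which avoids an out-of-order or duplicated parts manifest entirely, at the cost of throughput.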

Let me know if you need more details.

@rnicholus
Member

Sounds like it could be a bug: when a chunk fails with concurrent chunking enabled, something may result in an unordered list of parts in the digest request sent to S3 after all chunks have been uploaded. After all parts are uploaded, Fine Uploader sends a "complete multipart upload" request. The body of that request contains IDs (ETags, to be specific) for each chunk, and they are supposed to be in order. I'll look into this in the near future.

@rnicholus
Member

@jfpaulin Can you provide me with Fine Uploader JS/browser logs for the point when the chunk upload fails and then successfully retries? I'm also interested in the request body for the Complete Multipart Upload POST request that Fine Uploader sends after all parts have been uploaded.

I don't see how the parts could be out of order, as they are explicitly ordered just before the Complete POST is sent. Perhaps something is going wrong with the numbering of the parts. I suspect that, after an out-of-order chunk fails, when its index is added back to the "remaining chunk indexes" array, something unexpected is happening before that chunk is retried. The logs I have requested from you may shed some light on this.
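For context, the ordering step described here amounts to something like the following sketch. The function and field names (`buildCompleteBody`, `partNumber`, `etag`) are illustrative, not Fine Uploader's actual internals; the XML shape is S3's Complete Multipart Upload request body.

```javascript
// Sketch of building the Complete Multipart Upload body.
// Each entry records the 1-based S3 part number and the ETag
// S3 returned for that part's PUT; parts are explicitly sorted
// by part number just before the body is assembled.
function buildCompleteBody(parts) {
    var body = "<CompleteMultipartUpload>";
    parts
        .slice() // don't mutate the caller's array
        .sort(function (a, b) { return a.partNumber - b.partNumber; })
        .forEach(function (part) {
            body += "<Part><PartNumber>" + part.partNumber +
                "</PartNumber><ETag>" + part.etag + "</ETag></Part>";
        });
    return body + "</CompleteMultipartUpload>";
}
```

If that sort happens as expected, ordering alone cannot fail, which is why a duplicated or mis-numbered part is the more likely culprit.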

@rushimusmaximus

We've been getting reports recently of 5 GB+ files failing too (also with the S3/concurrent chunk setup). Unfortunately I haven't been able to personally recreate the issue after several attempts on multiple browsers. I did get some logs from someone who had the problem that I hope might be helpful. (This is in version 5.1.3.)

[Error] Failed to load resource: The network connection was lost. (...S3 URL...)
[Error] TypeError: undefined is not an object (evaluating 'e.upload')
_registerProgressHandler (anonymous function) s3.jquery.fine-uploader.min.js 17:5363
(anonymous function) s3.jquery.fine-uploader.min.js 18:24101
each s3.jquery.fine-uploader.min.js 15:8058
success s3.jquery.fine-uploader.min.js 15:12332

I'm going to keep testing to see if I can get it to happen for me.

@rnicholus
Member

5.1.3 is quite old. I'll either need to recreate this myself or get access to the logs mentioned in my last post in order to get to the bottom of this. I'll take a closer look sometime next week.

@rushimusmaximus

@rnicholus Yeah, I figured you'd say that ;) We're releasing an update to 5.5 "real soon now". The line numbers might not be helpful to you, but I was hoping the general error might provide some insight. Basically it seems like this may happen when there are network errors. I wasn't able to reproduce this today, but am in contact with someone who thinks they can do it reliably, so I'm going to work on getting you some better information.

@rnicholus
Member

@rushimusmaximus The error message in your comment above seems to be related to the network connection issue on your end, and is otherwise benign as far as I can tell. I suspect that the original issue reported here is related to assigning part indexes to retried out-of-order chunks.

@jfpaulin
Author

jfpaulin commented Feb 1, 2016

@rnicholus sorry for the delay, I was out of the office last week. I will try to get those logs for you this week.

@jfpaulin
Author

jfpaulin commented Feb 1, 2016

@rnicholus I attached two files. The first one is 'all.logs.txt', which contains all the console logs for my failed uploads. I attached all the logs to be sure that you have everything you need, since this is hard to reproduce. You can see the first chunk error around line 9430.

The second file, 'complete.post.txt', contains the request headers for the complete multipart POST and the body of that POST.

What was weird is that, that time, I had to force a resume on the file. After uploading the last chunk, everything stopped: no more progress, no more POST/PUT, and no more logs in the console. So I forced a resume of the file by refreshing the page and selecting the same file, and then I was able to get the error for the complete POST.

I'll try to reproduce it again without interfering, but I'm not sure if that changes anything.

all logs.txt
complete post.txt

@jfpaulin
Author

jfpaulin commented Feb 1, 2016

@rnicholus I also noticed that when resuming the file, FineUploader re-uploaded the chunk that had failed, then did the complete multipart POST, which failed.

Let me know if you need more details.

@rnicholus
Member

Chunks 1344-1347 initially failed due to issues on S3 (these are surprisingly common), but then recovered. Looking at the complete POST request, part number 1349 is duplicated:

<Part>
   <PartNumber>1349</PartNumber>
   <ETag>"ea1968a50c26c099597724727c36abc8"</ETag>
</Part>
<Part>
   <PartNumber>1349</PartNumber>
   <ETag>"ea1968a50c26c099597724727c36abc8"</ETag>
</Part>

Seems like this is causing S3 to reject the complete POST.

It's not clear why that specific part number was repeated. That chunk did not fail, but it is close in range to the chunks that did and were retried. I'll have to look closer.

@jfpaulin
Author

jfpaulin commented Feb 2, 2016

@rnicholus Do you think the fact that the file upload stopped after uploading the last chunk is also related to that problem?

@rnicholus
Member

Not sure. I don't see anything unusual in the logs. It may have just been a pending request to S3 that Fine Uploader was waiting for.

@jfpaulin
Author

jfpaulin commented Feb 2, 2016

Ok. I'm still running tests on this, and every time I get that problem, the upload stops and I have to force a resume on it. I can try to do it again and send you the POST/PUT logs for the last chunks?

@rnicholus
Member

@jfpaulin and @rushimusmaximus Is this something that can be easily reproduced by you or one of your customers? If yes, I'd like to consider giving you an updated version with some more logging so I can have an easier time determining why a part/ETag is duplicated in a failure scenario. Otherwise I'll just push out a pre-release with some code that removes any duplicates before the "multipart complete" REST call to S3.

@rnicholus
Member

You can disregard the last message, after looking at the logs even closer, I think I see what is causing a duplicate part number in the manifest. When an error on S3 forced a batch of concurrent upload requests to fail, one of those requests actually succeeded.

The batch consisted of parts 1344-1348 as far as I can tell. This makes sense, @jfpaulin, since you have the maxConnections option set to 5. Parts 1344-1347 failed but 1348 somehow succeeded (or at least that is what the logs indicate). I mentioned that the manifest contained a repeated part number of 1349. Amazon, for inexplicable reasons, decided that part numbers must start at 1 instead of 0. So while Fine Uploader's core logging code reports this part as 1348, AWS sees it as 1349. Anyway, due to some apparent error in Fine Uploader's concurrent chunking code, 1348/1349 was uploaded again when the batch of chunks was retried. I'm not certain why this happened, but this is what resulted in a duplicate entry, since the same part was uploaded twice.

Uploading the same part twice isn't a problem in itself, but reporting the same part twice appears to be. I should probably determine why a seemingly successfully uploaded part was re-uploaded; I imagine that will lead to a solution to this problem.
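The 0-based/1-based mismatch described above is simply:

```javascript
// S3 part numbers are 1-based, while the chunk indexes in Fine
// Uploader's logs are 0-based, so the conversion is an increment.
function toS3PartNumber(chunkIndex) {
    return chunkIndex + 1; // the chunk logged as 1348 is part 1349 to S3
}
```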

@rnicholus
Member

This is looking to be caused by a very unusual race condition. I'm not entirely certain about the chain of events, and reproducing looks to be difficult. I have not had any luck yet. The concurrent chunking feature is complex, as is the code. I'm hesitant to make any changes to the logic in this code unless I am 100% certain of the outcome and am certain that the change needs to be made. I also thought about simply checking the internal array that maintains the part numbers yet to be uploaded for a match before adding a new part number, but this check would occur often, and for very large files, this array could be quite large and examining it for a duplicate could be costly. Unless someone has some thoughts or insight regarding reliable reproduction, I may just opt for an easy "fix" that involves removing a duplicate entry in the "complete multipart" manifest just before this request is sent.
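The "easy fix" floated here would look roughly like this; it is only a sketch, under the assumption that the manifest is an array of part objects, and the names are hypothetical rather than Fine Uploader's actual code:

```javascript
// Sketch: drop duplicate part numbers from the manifest just
// before the Complete Multipart Upload request is sent, keeping
// the first entry seen for each part number.
function dedupeParts(parts) {
    var seen = {};
    return parts.filter(function (part) {
        if (seen[part.partNumber]) {
            return false; // already in the manifest, skip the duplicate
        }
        seen[part.partNumber] = true;
        return true;
    });
}
```

Because this runs once per upload on the finished manifest (rather than on every retry against the "remaining chunk indexes" array), it sidesteps the per-chunk lookup cost mentioned above.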

I really don't think this has any relation to the size of the file (6GB vs 500 MB), but I can see how the issue would be frustrating when a very large file fails to upload at the last step and is not retryable. Just out of curiosity, how often are your customers seeing this issue?

@rupert1073

Hi Ray, thanks for all your work. Our clients were having this issue daily with concurrent chunks on; the bigger the file, the more often the issue happens. I can reproduce it most of the time with files larger than 15 GB, but once in a while the file goes through. We'll be happy to help you with the tests.


@rnicholus
Member

If this truly does take that large of a file to reproduce, then I will have difficulty reproducing myself due to limited bandwidth at my current location. I'm still not convinced that file size is really a factor. A request error is required to reproduce, and this is more likely to occur with much larger files somewhere along the way. That is probably why you are seeing this with larger files. Did you say you are able to reproduce yourself very easily? If so I'd like to be able to send you updates for testing/verification in order to get to the bottom of this as quickly as possible. Will that work for you?

@rupert1073

Absolutely, it will be our pleasure to help you with this.


@rnicholus
Member

It will probably be easier to continue via email then. Can you contact me at [redacted]? I'll have you a build with some more logging and a first naive attempt at a fix sometime on Monday.

@rupert1073

Hi @rnicholus, did you receive my email?

@rnicholus
Member

Yes, I did. I'm currently GMT+7, so I will be sure to respond with details in the morning. Thank you for your patience.

@rnicholus rnicholus changed the title FineUploader S3 Concurrent Chunk Uploads Error on finalization - FineUploader S3 Concurrent Chunk Uploads Feb 13, 2016
@rnicholus rnicholus added this to the 5.5.1 milestone Feb 13, 2016
@rnicholus
Member

This issue now has my full attention, and as soon as I fix it, I'll push out a 5.5.1 release. This is a tricky one, so bear with me.

rnicholus added a commit that referenced this issue Feb 13, 2016
Also added some more logging in case this doesn't work.
#1519
rnicholus added a commit that referenced this issue Feb 17, 2016
Account for cancelled chunk uploads that are still waiting for signature

This should prevent a chunk upload that has not yet called xhr.send() from starting if it has been cancelled by error handling code.
#1519
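The guard described in that commit message could be sketched like this; it is illustrative only, not the actual Fine Uploader source, and the state/parameter names are hypothetical:

```javascript
// Sketch of the guard from the commit: before a chunk that was
// waiting on its signature calls xhr.send(), check whether
// error-handling code has already cancelled it.
function sendChunkIfStillActive(chunkState, xhr, payload) {
    if (chunkState.cancelled) {
        // The chunk was cancelled (e.g. by error handling for a
        // failed sibling in the same concurrent batch) while the
        // signature request was in flight; don't start the upload.
        return false;
    }
    xhr.send(payload);
    return true;
}
```

Without such a check, a cancelled-then-retried chunk can be uploaded twice, producing the duplicate manifest entry seen earlier in this thread.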
@rnicholus
Member

5.5.1-3 may have the fix for this issue. I have one person already testing, but was wondering if anyone else is interested in verifying as well? Since this change touches the most complex code in the library, more testers would be a good thing.

@rnicholus
Member

This has been released as 5.5.1, now on npm and also available via the website.

fragilbert added a commit to fragilbert/file-uploader that referenced this issue Aug 10, 2019
* refactor(concurrent chunking): too much logging
FineUploader#1519

* fix(concurrent chunking): account for cancelled chunk uploads that are still waiting for signature
This should prevent a chunk upload that has not yet called xhr.send() from starting if it has been cancelled by error handling code.
FineUploader#1519

* docs(concurrent chunking): prepare for 5.5.1 release
Removed temporary logs.
fixes FineUploader#1519

* docs(issues and PRs): first set of issue/PR templates
[skip ci]

* fix(s3): form uploader may not verify upload w/ correct bucket name
fixes FineUploader#1530

* docs(delete files): clarify `forceConfirm` option.
fixes FineUploader#1522

* docs(traditional): broken link to chunking feature page

* chore(release): prepare for 5.5.2 release

* docs(options): session option typo

[skip ci]

* feat(initial files): Allow initial files to be added via the API
Introduces a new API method - addInitialFiles.
closes FineUploader#1191

* docs(initial files): Update initial files feature page
closes FineUploader#1191
[skip ci]

* feat(button.js): Allow <input type="file"> title attr to be specified
This also account for extraButtons.
closes FineUploader#1526

* docs(options): typos
[skip ci]
FineUploader#1526

* docs(README): semver badge link is broken, not needed anymore anyway
[skip ci]

* chore(build): prepare for 5.6.0 release
[skip ci]

* chore(build): trivial change to re-run travis build

* docs(delete file): clearer docs for proper callback use

* chore(build): remove PR branch check - just gets in the way now

* chore(php): update local testing env to latest Fine Uploader PHP servers

* feat(Edge): DnD support for Microsoft Edge
Also adjusted an unreliable test. This was tested in Edge 13.10586.
FineUploader#1422

* docs(Edge): List Edge 13.10586+ as a supported browser
FineUploader#1422

* chore(build): Mark build 1 of 5.7.0
FineUploader#1422

* chore(build): prepare for 5.7.0 release
FineUploader#1422

* Pass xhr object through error handlers

* chore(release): prepare for 5.7.1. release
FineUploader#1599

* chore(release): build release branches too

* docs(contributing): attempt to re-integrate clahub.com

* feat(commonjs): CommonJS + AMD support
FineUploader#789

* feat(CryptoJS, ExifRestorer): Move 3rd-party deps to qq namespace
FineUploader#789

* refactor(concat.js): cleanup banner/footer for bundles
FineUploader#789

* fix(concat.js): don't add modules JS to css files
Also removed bad package.json main property.
FineUploader#789

* fix(concat.js): lint errors
FineUploader#789

* feat(CommonJS): more elegant importing of JS/CSS
FineUploader#789

* chore(build): prepare to publish 5.8.0-1 pre-release
FineUploader#789

* chore(build): prepare to publish 5.8.0-beta1 pre-release
FineUploader#789

* docs(README): gitter chat shield

* feat(build): better name for modern row-layout stylesheet
Only used in new lib directory.
FineUploader#1562

* docs(modules): Feature page and links to feature in various places
FineUploader#1562

* docs(version): Prepare for 5.8.0 release
FineUploader#1562

* fix(build): node version is too old to run updated build

* docs(features): Link to modules feature page.
FineUploader#1562

* fix(build): we are tied to node 0.10.33 ATM
FineUploader#1562

* chore(MIT): start of switch to MIT license
FineUploader#1568

* chore(MIT): better build status badge
FineUploader#1568

* docs(README): horizontal badges

FineUploader#1568

* status(README): license and SO badges
FineUploader#1568

* docs(README): fix license badge

FineUploader#1568

* chore(build): update license on banner
FineUploader#1568

* docs(README): add contributing section
FineUploader#1568

* chore(build): install grunt-cli
FineUploader#1568

* chore(git): ignore iws files
FineUploader#1568

* docs(index): update index page & footer
FineUploader#1568

* docs(support): simplify menu
FineUploader#1568

* docs(README): more info on contributing

* docs(README): grammar

* fix(spelling): various typos in tests, comments, docs, & code 

FineUploader#1575

* chore(build): start of 5.10.0 work

* feat(scaling): Allow an alternate library to be used to generate resized images

FineUploader#1525

* docs(scaling & thumbnails): 3rd-party scaling doc updates (FineUploader#1586)

FineUploader#1576

* chore(build): prepare for 5.10.0 release

* fix(session): Session requester ignores cors option (FineUploader#1598)

* chore(build): start of 5.11.0 changes
FineUploader#1598

* docs(events.jmd): typo

[skip ci]

* docs(delete.jmd): add missing backtic
[skip ci]

* docs(delete.jmd): add missing backtic
[skip ci]

* fix(image): Fixed a problem where image previews weren't being loaded (FineUploader#1610)

correctly if there was a query string in the URL

* docs(README): fix Stack Overflow badge

[skip ci]

* docs(README): fix Stack Overflow badge

[skip ci]

* chore(build): prepare for 5.10.1 release
[skip ci]

* docs(features.jmd): Document S3 Transfer Acceleration to S3 feature (FineUploader#1627)

Also removed mention of CF craziness. 
closes FineUploader#1556 
closes FineUploader#1016

* feat(build) drop grunt, core & dnd builds (FineUploader#1633)

closes FineUploader#1569 
closes FineUploader#1605 
close FineUploader#1581 
closes FineUploader#1607

* Revert "FineUploader#1569 build cleanup" (FineUploader#1634)

* feat(build) drop grunt, core & dnd builds (FineUploader#1635)

closes FineUploader#1569 
closes FineUploader#1605 
closes FineUploader#1581 
closes FineUploader#1607

* docs(README.md): better build instructions

FineUploader#1569

* refactor(Makefile): caps to lower-case
FineUploader#1569

* fix(Makefile): bad syntax in publish recipe
FineUploader#1569

* feat(Makefile): more comprehensive publish recipe
FineUploader#1569

* fix(CommonJS): missing core aliases
fixes FineUploader#1636

* fix(CommonJS): traditional should be default
fixes FineUploader#1636

* docs(modules.jmd): mention core builds, fix script paths
fixes FineUploader#1636

* docs(modules.jmd): more script path fixes
fixes FineUploader#1636

* fix(lib/core): wrong path for core module `require` statements
fixes FineUploader#1637

* chore(Makefile): allow publish simulation
`make publish simulation=true`

* fix(Makefile): traditional endpoint jquery js files missing
fixes FineUploader#1639

* fix(Makefile): traditional endpoint jquery js files missing from zip
fixes FineUploader#1639

* docs(README.md): better quality logo

[skip ci]

* docs(README.md): remove freenode chat badge

[skip ci]

* fix(Makefile): jQuery S3 & Azure are missing plug-in aliases
fixes FineUploader#1643

* fix(Makefile): jQuery S3 & Azure are missing plug-in aliases
fixes FineUploader#1643

* feat(getting started): better getting started guide (FineUploader#1651)

FineUploader#1646

* docs(README.md): update changelog link

[skip ci]

* docs(README.md): update changelog link

[skip ci]

* docs(README.md): remove freenode chat badge

[skip ci]

* fix(Makefile): uploader doesn't load in IE8/9
fixes FineUploader#1653

* fix(azure/uploader.basic): customHeaders omitted from delete SAS

closes FineUploader#1661

* chore(build): prepare for 5.11.8 release
FineUploader#1661

* fix(s3-v4) Invalid v4 signature w/ chunked non-ASCII key (FineUploader#1632)

closes FineUploader#1630

* chore(build): start of 5.11.9 release work
FineUploader#1632

* chore(Makefile): make it easier to start local test server

* fix(request.signer.js): Client-side signing errors don't reject promise (FineUploader#1666)

This is yet another instance that details why I would like to rip out `qq.Promise` and instead depend on native `Promise` (or require a polyfill).

* Update docs for retry feature (FineUploader#1675)

event onRetry => onAutoRetry

* docs(getting started): page 1 code example typos (FineUploader#1677)

closes FineUploader#1676

* docs(getting started): page 1 code example typos (FineUploader#1677)

closes FineUploader#1676

* docs(forms.jmd): Minor edit to fix invalid example code (FineUploader#1679)

[skip ci]

* docs(options-s3.jmd): Remove console.log on S3 Options page (FineUploader#1681)

[skip ci]

* fix(upload.handler.controller): deleted file doesn't fail fast enough
Now, if a zero-sized chunk is detected (which happens if the file is deleted or no longer available during the upload), the upload will be marked as failing.
fixes FineUploader#1669

* fix(uploader.basic.api.js): file marked as retrying too late
Should happen before the wait period, not after.
fixes FineUploader#1670

* docs(statistics-and-status-updates.jmd): update retrying status def
FineUploader#1670
[skip ci]

* docs(options-ui.jmd): remove duplicate option
FineUploader#1689
[skip ci]

* docs(options-azure.jmd): typo
FineUploader#1689
[skip ci]

* docs(qq.jmd): invalid docs for qq.each iterable param
FineUploader#1689
[skip ci]

* docs(02-setting_options-s3.jmd): Add comma to object literal (FineUploader#1694)

(now the snippet is valid JavaScript)

* fix(Makefile): identify.js included twice (FineUploader#1691)

This does not appear to cause any issues, but it does inflate the size of all built JS files a bit.

* fix(Makefile): $.fineUploaderDnd missing from jQuery builds
fixes FineUploader#1700

* chore(build): field testing for 5.11.10 before release
FineUploader#1691
FineUploader#1700

* chore(build): release 5.11.10
FineUploader#1691
FineUploader#1700

* docs(README.md): add twitter shield

[skip ci]

* Update dependencies to enable Greenkeeper 🌴 (FineUploader#1706)

* docs(README.md): add twitter shield

[skip ci]

* chore(package): update dependencies

https://greenkeeper.io/

* chore(package.json): start of v5.12.0

[skip ci]

* chore(version.js): start of v5.12.0

* feat(validation): Allow upload with empty file (FineUploader#1710)

Don't reject an empty file if `validation.allowEmpty` is set to `true`.
closes FineUploader#903 
closes FineUploader#1673

* chore(Makefile): test servers may not start without changes

* Update karma to the latest version 🚀 (FineUploader#1721)

* chore(package): update clean-css to version 3.4.24 (FineUploader#1723)

https://greenkeeper.io/

* feat(request-signer.js): Allow signature custom error messages (FineUploader#1724)

Update S3 request signer to use `error` property on response if set.
Includes docs + tests.

* chore(package.json): upgrade to clean-css 4.x
closes FineUploader#1732

* chore(version.js): forgot to update all files w/ new version
closes FineUploader#1732

* chore(package.json): update karma to version 1.4.1 (FineUploader#1736)

https://greenkeeper.io/

* feat(uploader.basic.api.js): removeFileRef method (FineUploader#1737)

When called, this deleted the reference to the Blob/File (along with all other file state tracked by the upload handler).
closes FineUploader#1711

* docs(methods.jmd): document removeFileRef method
closes FineUploader#1711

* feat(uploader.basic.api.js): intial setStatus() API method implementation
Initially, only qq.status.DELETED and qq.status.DELETE_FAILED are supported. All other statuses will throw. This can be used to mark a file as deleted, or to indicate that a delete attempt failed if you are using delete file logic outside of Fine Uploader's control. This will update the UI by removing the file if you are using Fine Uploader UI as well.
closes FineUploader#1738

* chore(build): 5.14.0-beta2
FineUploader#1738

* docs(methods.jmd): Mention the only statuses that are valid ATM
closes FineUploader#1739
[skip ci]

* docs(methods.jmd): invalid character in setStatus signature
FineUploader#1739
[skip ci]

* chore(package): update clean-css-cli to version 4.0.5 (FineUploader#1746)

Closes FineUploader#1745

https://greenkeeper.io/

* fix(Makefile): npm path not properly set for cygwin (FineUploader#1698)

Detect npm-path properly in cygwin (fixes windows build). Looks for '_NT' to detect if we are on cygwin or not.

* chore(package): update clean-css-cli to version 4.0.6 (FineUploader#1749)

https://greenkeeper.io/

* feat(fine-uploader.d.ts): initial Typescript definitions (FineUploader#1719)

This includes: 
* Typescript definition file that covers the entire API.
* Updated Makefile to include typescript directory in build output.
* typescript/fine-uploader.test.ts.
* Various documentation fixes.

* chore(build): prepare for 5.14.0-beta3 release
[skip ci]

* Improve issue templates + move support back to issue tracker (FineUploader#1754)

* chore(package): update clean-css-cli to version 4.0.7 (FineUploader#1753)

https://greenkeeper.io/

* chore(build): prepare for 5.14.0 release

* docs(amazon-s3.jmd): missing linefeeds in v4 signing steps

[skip ci]

* docs(amazon-s3.jmd): 2nd attempt at fixing nested list in v4 sig section

[skip ci]

* Add missing return definition

My env was complaining about implicit any.

* docs(navbar.html): prepare for switch to SSL
 [skip ci]

* docs(navbar.html): SSL not enabled yet for fineuploader.com
CloudFlare will redirect to HTTPS once it's ready anyway.
 [skip ci]

* docs(navbar.html): prepare for switch to SSL
 [skip ci]

* docs(async...jmd): typo in shitty home-grown promise impl docs
 [skip ci]

* chore(package): update karma to version 1.5.0 (FineUploader#1762)

https://greenkeeper.io/

* chore(build): generate docs to docs repo on Travis (FineUploader#1769)

This will eventually replace the dedicated Dreamhost server.

* chore(Makefile): ensure root of docs mirrors /branch/master
FineUploader#1770

* chore(Makefile): ensure root of docs mirrors /branch/master
FineUploader#1770

* chore(package): update clean-css-cli to version 4.0.8 (FineUploader#1771)

https://greenkeeper.io/

* chore(build): prepare for 5.14.1 release
FineUploader#1759

* chore(package): update karma-spec-reporter to version 0.0.27 (FineUploader#1773)

https://greenkeeper.io/

* chore(package): update karma-spec-reporter to version 0.0.29 (FineUploader#1774)

https://greenkeeper.io/

* chore(package): update karma-spec-reporter to version 0.0.30 (FineUploader#1775)

https://greenkeeper.io/

* feat(docs/navbar): easy access to docs for specific version

FineUploader#1770

* docs(main.css): tag-chooser mis-aligned on mobile

* chore(Makefile): use "released" version of FineUploader/docfu
FineUploader#1770

* chore(Makefile): use "released" version of FineUploader/docfu (1.0.2)
FineUploader#1770

* chore(package): update uglify-js to version 2.8.0 (FineUploader#1780)

https://greenkeeper.io/

* chore(package): update uglify-js to version 2.8.1 (FineUploader#1781)

https://greenkeeper.io/

* chore(package): update uglify-js to version 2.8.2 (FineUploader#1782)

https://greenkeeper.io/

* chore(package): update uglify-js to version 2.8.3 (FineUploader#1783)

https://greenkeeper.io/

* fix(uploader.basic.api.js): onStatusChange called too early

onStatusChange is called for initial/canned files before internal state for the file is completely updated. This change introduces an update that makes it easy for internal users of the upload-data service to defer the status change broadcast until all processing and state updates are complete.
fixes FineUploader#1802
fixes FineUploader/react-fine-uploader#91

* test(package): lock pica to v2.0.8 (FineUploader#1818)

Pica recently released v3.0.0, with a breaking change which our tests rely on. This
commit locks pica to the last stable version in order to make the test suite pass again.

* docs(LICENSE): clarify copyright history

[skip ci]

* docs(LICENSE): clarify copyright history

[skip ci]

* docs(README): + cdnjs badge

[skip ci]

* Update TypeScript definitions S3/Azure properties with default values to be optional (FineUploader#1830)

Change from required to optional for certain properties that FineUploader automatically provides defaults

* Added Open Collective to README.md and in npm postinstall (FineUploader#1795)

* Have to replace docs token after Travis' major security fuckup

* Work around Travis-CI key issue, sigh

* prepare for 5.14.3 release

* minor spelling/method name correction (FineUploader#1851)

* not using open collective anymore

[skip ci]

* not using open collective anymore

* prepare for 5.14.4 release

* Fix extraButtons typescript define not correct (FineUploader#1850)

Fix in the TS definitions where the extraButtons option didn't allow passing an array of ExtraButtonsOptions

* fix(azure): multi-part upload to S3 fails on Edge >= 15

fixes FineUploader#1852

* chore(build): prepare for 5.14.5 release
FineUploader#1852
FineUploader#1859

* docs(README.md): new maintainer notice

[skip ci]

* Updated Typescript to support canonical imports (FineUploader#1840)

* Updated Typescript to support canonical imports

* UI Options now extend core options

* constructor removed from interfaces that are function types, some parameters that provide default values changed to optional

Reverting some of the function type interface declarations to what they
were before to not use constructors. Some typo fixes

* Extra Buttons type fixed, code formatting

* Test file updated to demonstrate proper syntax usage

* TypeScript added to docs

Update to the documentation highlighting proper TypeScript usage
according to changes from this PR

* Adding abnormally missed text in previous commit

* Updated version number for next release (FineUploader#1891)

* no longer accepting support requests

* Fixes FineUploader#1930 documentation issue (FineUploader#1932)

* Fixes FineUploader#1930: fix documentation errors

Adds missing `document.` before `getElementById` calls.

* Adds missing `document.` to `getElementById` calls

* Remove Widen as sponsor

* feat(S3): Allow serverside encryption headers as request params (FineUploader#1933)

fixes FineUploader#1803

* fix(templating): correctly handle tables (FineUploader#1903)

When adding new rows to a table, the existing mechanism of storing the HTML of
a row in a variable breaks, at least on Firefox: when parsing this HTML fragment
and creating DOM elements, the browser will ignore tags it does not expect
without proper parent tags. Appending this modified DOM branch to the table
results in a broken DOM structure. Cloning the DOM branch of a row and appending
this clone to the table works just fine.

fixes FineUploader#1246
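The fix above can be sketched roughly as follows (the function and parameter names are hypothetical, not Fine Uploader internals): clone the row's existing DOM branch instead of re-parsing its HTML string.

```javascript
// Minimal sketch of the table-row fix: cloneNode(true) deep-copies the
// existing DOM branch without any HTML re-parsing, so the browser never
// gets a chance to discard <tr>/<td> tags that appear outside a <table>.
function appendTableRow(tableBody, templateRow) {
  var row = templateRow.cloneNode(true);
  tableBody.appendChild(row);
  return row;
}
```

The earlier approach (`tableBody.innerHTML += rowHtml`) forces the browser to parse table tags outside a table context, which is exactly where Firefox drops them.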

* prepare to release 5.15.1

* fix(Makefile): wrong case of `client/commonJs` (FineUploader#1828)

* fix(dnd): ignore non-file drag'n'drop events (FineUploader#1819)

Since we only deal with files, it makes sense to ignore all non-file-related events (e.g. dragging
plaintext). This commit fixes a few things that have changed in the browsers and subtly break
the current checks.

* The `contains` function on `dt.files` has been removed from the spec and will always return
  undefined. Except for IE, which hasn't implemented the change.
  * Chrome and Firefox have replaced it with `includes`, which we now use
  * We've left a `contains` check in there for IE as a last resort
  * Remove the comment about it being Firefox only, since it also works in Chrome now
  * More info re: removal at: https://github.com/tc39/Array.prototype.includes#status

* The dt.files property always seems to be an empty array for non-drop events. Empty arrays are
  truthy, and so this will always satisfy the `isValidFileDrag` check before it can validate that
  the types array includes files
  * It will now only be truthy if the files array actually contains entries

* There is a drop handler which binds to the document and always prevents all default drop
  behaviour from occurring, including things like dropping text into textfields
  * It will now only prevent default behaviour for file drops, which has the handy side-effect
    of preventing the page from navigating to the dropped file if the user misses the dropzone.

Fixes FineUploader#1588
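The corrected checks can be sketched roughly as follows (assumed shape, not the actual Fine Uploader source): prefer `types.includes`, fall back to `contains` for IE, and only trust `dt.files` when it actually has entries.

```javascript
// Sketch of the corrected file-drag validation described above.
function isFileDrag(dataTransfer) {
  var types = dataTransfer && dataTransfer.types;
  if (!types) {
    return false;
  }
  // `contains` was removed from the spec (always undefined in Chrome/Firefox),
  // so use `includes` when available and keep `contains` only as an IE fallback.
  var hasFilesType = types.includes
    ? types.includes("Files")
    : Boolean(types.contains && types.contains("Files"));
  // dt.files is an empty (but truthy) list for non-drop events, so it only
  // counts as evidence of a file drag when it actually contains entries.
  var hasFileEntries = Boolean(dataTransfer.files && dataTransfer.files.length > 0);
  return hasFilesType || hasFileEntries;
}
```

With this shape, dragging plaintext over the page no longer satisfies the check, and default drop behaviour can be left alone for non-file drags.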

* prepare for 5.15.2 release

* prepare for 5.15.3 release

* fix(dnd.js): Firefox drag area flickers (FineUploader#1946)

Removes Firefox check in leavingDocumentOut.
fixes FineUploader#1862

* prepare for 5.15.4 release

* bloburi param in doc now matches code (FineUploader#1950)

Minor edit to get the docs to match the code on the SAS request params.

* fix(templating.js): reset caused duplicate template contents (FineUploader#1958)

fixes FineUploader#1945

* prepare for 5.15.5 release

* fix(uploader.basic.api.js): auto-retry count not reset on success (FineUploader#1964)

fixes FineUploader#1172

* more maintainers

[skip ci]

* fix(dnd.js): qqpath wrong if file name occurs in parent dir (FineUploader#1977)

fixes FineUploader#1976

* feat(uploader.basic.js): more flexible server endpoint support (FineUploader#1939)

* Local dev/testing ports 3000/3001 clash with my local env, and possibly others - moving to 4000/4001.

* returned onUploadChunk promise can override method, params, headers, & url
* promissory onUpload callback

* always ensure test servers are killed either on test start or stop

* don't try to kill test server on CI before tests start

* option to allow upload responses without { "success": true }

* allow default params to be omitted from upload requests

* don't fail upload w/ non-JSON response when requireSuccessJson = false

* more flexible chunking.success request support

* add .editorconfig (can't believe this didn't exist until now)

* Allow custom resume keys and data to be specified.

* include customResumeData in return value of getResumableFilesData API method

* add isResumable public API method

* introduce chunking.success.resetOnStatus to allow FU to reset a file based on chunking.success response code

* new API method: isResumable(id)

* Allow onUpload resolved Promise to pause the file.
Use case: when onUpload is called, you make a request to your server to see if the file already exists. If it does, you want to let your user decide whether to overwrite the file or cancel the upload entirely. While waiting for user input you don't want to hold a spot in the upload queue. If the user decides to overwrite the file, call the `continueUpload` API method.
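A hypothetical sketch of that flow (the server check and response shape are assumptions; resolving onUpload's promise to pause the file and later calling `continueUpload(id)` follow the description above):

```javascript
// Decide how the promise returned from onUpload should resolve: pausing
// frees the file's spot in the upload queue while the user decides.
function onUploadDecision(serverSaysFileExists) {
  return serverSaysFileExists ? { pause: true } : undefined;
}

// Browser-side wiring (not run here; checkExists() is a hypothetical helper
// that asks your server whether the file already exists):
//
// var uploader = new qq.FineUploaderBasic({
//   callbacks: {
//     onUpload: function (id, name) {
//       return checkExists(name).then(onUploadDecision);
//     }
//   }
// });
// ...later, if the user opts to overwrite: uploader.continueUpload(id);
```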

* Allow per-file chunk sizes to be specified.
chunking.partSize now accepts a function, which passes the file ID and size
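A hypothetical example of that hook; the thresholds are arbitrary, and only the `(id, size)` signature comes from the description above:

```javascript
var MiB = 1024 * 1024;

// Per-file chunk size: S3 multipart uploads require parts of at least 5 MiB
// (except the last part), so never go below that; use bigger chunks for very
// large files to keep the total part count down.
function partSize(id, fileSize) {
  return fileSize > 1024 * MiB ? 50 * MiB : 5 * MiB;
}

// Sketch of the option wiring:
// new qq.FineUploaderS3({ chunking: { enabled: true, partSize: partSize } });
```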

* feat(beforeUnload): new option to turn off beforeUnload alert during uploads

* feat(features.js): auto-detect folder support

* Allow access to Blob when file status is still SUBMITTING

* docs: options, API, and events doc updates

* added qq.status.UPLOAD_FINALIZING - don't cancel or pause in this state

closes FineUploader#848
closes FineUploader#1697
closes FineUploader#1755
closes FineUploader#1325
closes FineUploader#1647
closes FineUploader#1703

* fix(various): misc areas where null values may cause problems (FineUploader#1995)

* fix(upload.handler.controller): missing null-check in send()

* docs: fix Amazon S3 v4 signature guide (FineUploader#1998)

* docs: fix Amazon S3 v4 signature guide

* docs: s3 v4 signature

* docs(s3): fix chunked upload v4 (FineUploader#2001)

* Clarified that callbacks are FineUploader option. (FineUploader#2014)

* Call target.onload() only when defined (FineUploader#2056)

This removes the "Uncaught TypeError: target.onload is not a function" console error during image preview

* consider Firefox when checking for Android Stock Browser (FineUploader#1978) (FineUploader#2007)

* feat(dnd.js): add dragEnter and dragLeave callbacks (FineUploader#2037)

* feat(dnd.js): add dragEnter and dragLeave callbacks

* add dragEnter/dragLeave doc

* fix(Makefile): smart_strong import error on docfu install (FineUploader#2068)

Fixed in docfu 1.0.4, which locks us to python-markdown 2.6.11 (the last version to include smart_strong).
https://github.com/FineUploader/docfu/releases/tag/1.0.4

* fix logo

* not looking for new maintainers anymore

* bye bye fine uploader!