< 2020-06-10 >

1,283,982 events, 636,570 push events, 1,016,826 commit messages, 75,406,435 characters

Wednesday 2020-06-10 00:38:32 by maikCyphlock

This is the explanation of why I am learning to code

I relied on this to create my project. This is not my own code; I just read it and tried to understand it, and yeah, I did it in my first month. It was amazing for me, so I gained confidence after that. I made my v1 of text-to-speech for my friends at the university, then modified it by creating new elements like English and Spanish buttons and adding my university's logo.


Wednesday 2020-06-10 02:15:09 by Unit 0016

Outpost 255: The Return Of Outpost

Adds:

  • More FUCKING Cameras.
  • Medical is now not just a formality and a second fucking prison.
  • Removed glass floors. I'm not sure why they're broken, but I'd blame goonstation.
  • Adds the teleporter and hand teleporters. Now you can stop bitching.
  • Captain's suit storage is now a thing.
  • Merged the fucking pipes.
  • Fixed the AI sat and added turrets. Malf AIs, Celebrate!
  • Removed mysterious fleshy mass in the dogpen.
  • Added the all holy bud.
  • Added some arcade machines.

Maintenance needs more spicing up.


Wednesday 2020-06-10 03:07:25 by Tom Lane

Reconsider the representation of join alias Vars.

The core idea of this patch is to make the parser generate join alias Vars (that is, ones with varno pointing to a JOIN RTE) only when the alias Var is actually different from any raw join input, that is, when a type coercion and/or COALESCE is necessary to generate the join output value. Otherwise just generate varno/varattno pointing to the relevant join input column.

In effect, this means that the planner's flatten_join_alias_vars() transformation is already done in the parser, for all cases except (a) columns that are merged by JOIN USING and are transformed in the process, and (b) whole-row join Vars. In principle that would allow us to skip doing flatten_join_alias_vars() in many more queries than we do now, but we don't have quite enough infrastructure to know that we can do so --- in particular there's no cheap way to know whether there are any whole-row join Vars. I'm not sure if it's worth the trouble to add a Query-level flag for that, and in any case it seems like fit material for a separate patch. But even without skipping the work entirely, this should make flatten_join_alias_vars() faster, particularly where there are nested joins that it previously had to flatten recursively.

An essential part of this change is to replace Var nodes' varnoold/varoattno fields with varnosyn/varattnosyn, which have considerably more tightly-defined meanings than the old fields: when they differ from varno/varattno, they identify the Var's position in an aliased JOIN RTE, and the join alias is what ruleutils.c should print for the Var. This is necessary because the varno change destroyed ruleutils.c's ability to find the JOIN RTE from the Var's varno.

Another way in which this change broke ruleutils.c is that it's no longer feasible to determine, from a JOIN RTE's joinaliasvars list, which join columns correspond to which columns of the join's immediate input relations. (If those are sub-joins, the joinaliasvars entries may point to columns of their base relations, not the sub-joins.) But that was a horrid mess requiring a lot of fragile assumptions already, so let's just bite the bullet and add some more JOIN RTE fields to make it more straightforward to figure that out. I added two integer-List fields containing the relevant column numbers from the left and right input rels, plus a count of how many merged columns there are.

This patch depends on the ParseNamespaceColumn infrastructure that I added in commit 5815696bc. The biggest bit of code change is restructuring transformFromClauseItem's handling of JOINs so that the ParseNamespaceColumn data is propagated upward correctly.

Other than that and the ruleutils fixes, everything pretty much just works, though some processing is now inessential. I grabbed two pieces of low-hanging fruit in that line:

  1. In find_expr_references, we don't need to recurse into join alias Vars anymore. There aren't any except for references to merged USING columns, which are more properly handled when we scan the join's RTE. This change actually fixes an edge-case issue: we will now record a dependency on any type-coercion function present in a USING column's joinaliasvar, even if that join column has no references in the query text. The odds of the missing dependency causing a problem seem quite small: you'd have to posit somebody dropping an implicit cast between two data types, without removing the types themselves, and then having a stored rule containing a whole-row Var for a join whose USING merge depends on that cast. So I don't feel a great need to change this in the back branches. But in theory this way is more correct.

  2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse into join alias Vars either, because the cases they care about don't apply to alias Vars for USING columns that are semantically distinct from the underlying columns. This removes the only case in which markVarForSelectPriv could be called with NULL for the RTE, so adjust the comments to describe that hack as being strictly internal to markRTEForSelectPriv.

catversion bump required due to changes in stored rules.

Discussion: https://postgr.es/m/7115.1577986646@sss.pgh.pa.us


Wednesday 2020-06-10 04:19:56 by Toydotgam (OLD ACCOUNT)

Update License

Changed to the DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE


Wednesday 2020-06-10 04:22:49 by Jacob Quinn

Refactor CSV internals to produce fully mutable columns by default

This involves quite a bit, so I'll try to spell out some of the chunks of work going on here, starting with the key insight that kicked off all this work.

Previously, to get a "type-stable" inner parsing loop over columns, I had come to the conclusion you actually needed the same concrete typed column type, no matter the column type of the data. This led to the development of the typecode + tapes system that has served us pretty well over the last little while. The idea there was any typed value could be coerced to a UInt64 and thus every column's underlying storage was Vector{UInt64}, wrapped in our CSV.Column type to reinterpret the bits as Int64, Float64, etc. when indexed. This was a huge step forward for the package because we were no longer trying to compile custom parsing kernels for every single file with a unique set of column types (which could be fast once compiled, but with considerable overhead at every first run).

The disadvantage of the tape system was it left our columns read-only; yes, we could have made the regular bits types mutable without too much trouble, but it was hard to generalize to all types and we'd be stuck playing the "make CSV.Column be more like Array in this way" game forever. All those poor users who tried mutating operations on CSV.Column, not realizing they needed to make a copy.

While refactoring ODBC.jl recently, at one point I realized that with the fixed, small set of unique types you can receive from a database, it'd be nice to roll my own "union splitting" optimization and unroll the 10-15 types myself in order to specialize. This is because the union-splitting algorithm provided by core Julia will bail once you hit 4 unique types. But in the case of ODBC.jl, I knew the static set of types, and unrolling the 12 or so types for basically setindex! operations could be a boon to performance. As I sat pondering how I could reach into the compiler with generated functions or macros or some other nonsense, the ever-heroic Tim Holy swooped in on my over-complicated thinking and showed me that just writing the if branches and checking the objects against concrete types is exactly the same as what union-splitting does. What luck!

What this means for CSV.jl is we can now allocate "normal" vectors and in the innermost parsing loop (see parserow in file.jl), pull the column out of our columns::Vector{AbstractVector} and check it against the standard set of types we support and manually dispatch to the right parsing function. All without spurious allocations or any kind of dynamic dispatch. Brilliant!
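For a feel of the pattern, here is a hedged transliteration to Python (the column classes are hypothetical stand-ins, and unlike Julia's compiler, Python will not turn these branches into static dispatch, so this only shows the shape of the trick):

    from typing import List, Union

    # Stand-ins for the fixed set of concrete column types the parser supports.
    class IntColumn(list): pass
    class FloatColumn(list): pass
    class StringColumn(list): pass

    Column = Union[IntColumn, FloatColumn, StringColumn]

    def parserow(columns: List[Column], fields: List[bytes]) -> None:
        # Pull each column out of the heterogeneous list and branch on its
        # concrete type -- the hand-rolled analogue of union splitting.
        for col, raw in zip(columns, fields):
            if isinstance(col, IntColumn):
                col.append(int(raw))
            elif isinstance(col, FloatColumn):
                col.append(float(raw))
            elif isinstance(col, StringColumn):
                col.append(raw.decode())
            else:
                raise TypeError(f"unsupported column type: {type(col)!r}")

    cols: List[Column] = [IntColumn(), FloatColumn(), StringColumn()]
    parserow(cols, [b"1", b"2.5", b"abc"])   # cols == [[1], [2.5], ['abc']]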

In implementing this "trick", I realized several opportunities to simplify/clean things up (net -400 LOC!), which include:

  • Getting rid of iteration.jl and tables.jl; we only need a couple lines from each now that we are getting rid of CSV.Column and the threaded-row iteration shenanigans we were up to
  • Getting rid of the typecodes we were using as our pseudo-type system. While this technically could have stayed, I opted to switch to using regular Julia types to streamline the process from parsing -> output, and hopefully pave the way to supporting a broader range of types while parsing (see #431).
  • Move all the "sentinel array" tricks we were using into a more formally established (and tested) SentinelArrays.jl package; a SentinelArray uses a sentinel value of the underlying array type for missing, so it can be thought of as a Vector{Union{T, Missing}} for all intents and purposes. This package's array types will be used while solutions to using a regular Vector{Union{T, Missing}} are worked out; specifically, how to efficiently allow a non-copying operation like convert(Vector{T}, A::Vector{Union{T, Missing}})

Probably the most surprising thing about this PR is that it's pretty much non-breaking! I haven't finished the CategoricalArrays support just yet, but we'll include it for now as we get ready to make other deprecations in preparation for 1.0. The other bit I'm still mulling over is how exactly to treat string columns; in this PR, we just fully materialize them as Vector{String}, but that can be a tad expensive depending on the data. I'm going to try out some solutions locally to see if we can utilize WeakRefStringArray but will probably leave the option to just materialize the full string columns, since we've had people ask for that before.

If you've made it this far, congratulations! I hope it was useful in explaining a bit about what's going on here. I'd love any feedback/thoughts if you have them. I know it's a lot to ask anyone to review such a monster PR in their free time, but if you're willing, I'm more than happy to answer questions or chat on slack (#data channel) to help clarify things.


Wednesday 2020-06-10 06:15:46 by Abhinav Pillai

Create The_Number_Theory.java

Contains property checking for more than 50 types of numbers. Might get a bit messy trying to find a specific function, so I am providing the line numbers for the same : (P.S. - I have been compiling them since my high school years, so some of the display texts might be a bit over the top...)

6 - Prime Number
36 - Armstrong Number
65 - Palindrome Number
90 - Fibonacci Generator
109 - Buzz Number
128 - Perfect Number
155 - Prime Palindrome
202 - Special Two-digit Number
228 - Automorphic Number
248 - Sunny Number
277 - Strong Number
308 - Disarium Number
338 - Magic Number
354 - Primorial of a Number
386 - Duck Number
414 - Harshad Number
439 - Kaprekar Number
474 - Unique Number
503 - Prime Factors of a Number
529 - GCD, followed by Co-Prime Number Check
559 - Neon Number
583 - Evil Number
624 - Smith Number
680 - HCF and LCM
702 - Nelson Number
733 - Amicable Pair
771 - Twin Prime Numbers
803 - Keith Number
857 - Lychrel Number [NOT DONE YET]
871 - Self-describing Number
919 - Heavy Number
957 - Powerful Number
1024 - Ramanujan Number
1062 - Bouncy Number
1102 - Emirp Number
1149 - Abundant Number
1180 - Sphenic Number
1223 - Aliquot Sequence of a Number
1264 - Social Number
1309 - Chen Prime Number
1398 - Leyland Number
1441 - Safe Prime Number
1493 - Fascinating Number
1551 - Woodall Number
1578 - Dudeney Number
1623 - Thabit Number
1660 - Happy Number
1686 - Golden Ratio [NOT DONE YET]
1696 - Pernicious Number
1743 - Devlali/Colombian Number
1784 - Sierpinski Number [NOT DONE YET]
1794 - Markov Number
1839 - Narcissistic Number
1867 - Star Number
1896 - Ugly/Hamming Number
1934 - Goldbach Number


Wednesday 2020-06-10 07:36:09 by GamingGeek

fuck you @Aktimoose I fucking hate you for fucking making me fucking change the emote that better get me on sassy commits so it tags Aktimoose lol


Wednesday 2020-06-10 08:17:04 by mirzi

improved eq and pl shade, eq text crust removal

fuck you maki


Wednesday 2020-06-10 09:07:45 by Panu Matilainen

Fix completely broken prefix search on sqlite backend

The prefix search was so wrong it's a small miracle it ever did anything at all. What have I been thinking? Well, I do remember thinking this prefix stuff looks kinda fishy but then it seems to work so...

The prefix search belongs to the keyed iterator fetch case of course, not the case where we're otherwise iterating over all keys.

Fixes: #1260


Wednesday 2020-06-10 09:21:58 by zlurker

UI Testing

Holy fuck why the fuck do I have to create a blueprint for every little single shit, I love blueprints in Infinity Dimension but what in the flying fuck is this


Wednesday 2020-06-10 09:25:53 by wm4

build: change/simplify optical disc device detection

So this had a long list of OS specific device files for CD and DVD physical devices. If we ever migrate to a new build system, this will be a PITA for rather questionable usefulness. For example, Linux does provide /dev/sr0, but it also creates /dev/cdrom and /dev/dvd symlinks. For Windows, it's sort of ridiculous to hardcode D: (well, might still work).

Get rid of this stuff, hardcode a single device name on Unix, keep D: on Windows. In general, this probably requires runtime detection in theory, so it's not much of a sin to move these from the build system to the code, even if we don't make use of determining the device at runtime yet.

This should not be taken as an invitation to add new ifdeffery for device files on other Unix variants. Rather, if anyone wants to extend this, I'd argue they should add proper runtime detection of the device. Or, even better, create an appropriate symlink on their systems.


Wednesday 2020-06-10 09:27:24 by NansyT

Added calendar stuff

This fucking calendar shit should be fucking working now... God I'm so fucking tired of this... pls just fucking end this... I hate this.

TODO: Alerts when error occurs


Wednesday 2020-06-10 10:06:51 by Marko Grdinić

"9:20am. I am up.

9:40am. Let me just go for Ribbon Warrior and then I'll start. Honestly, I actually feel like reading the C# book more. Yesterday I read a few pages forward, and it started talking about user and kernel mode constructs which feels like something I should know about.

9:55am. That was fun. Let me go for the C# book. Then I'll work on the reactive pipeline. Baby steps and all that. I'll get through the block as long as I keep putting in effort. So that is what I am going to do.

10am. I am not going to stress myself over the progress I am making or set deadlines. I'll just do good 5h every day.

The System.Threading namespace offers an abstract base class called WaitHandle. The WaitHandle class is a simple class whose sole purpose is to wrap a Windows kernel object handle.

I had no idea about this.

10:05am.

A common usage of the kernel-mode constructs is to create the kind of application that allows only one instance of itself to execute at any given time.

Hmmmm, actually this could be useful for Spiral. Instead of starting the process explicitly, having only one would be good.

public static class Program {
    public static void Main() {
        Boolean createdNew;

        // Try to create a kernel object with the specified name
        using (new Semaphore(0, 1, "SomeUniqueStringIdentifyingMyApp", out createdNew)) {
            if (createdNew) {
                // This thread created the kernel object, so no other instance of this
                // application must be running. Run the rest of the application here...
            } else {
                // This thread opened an existing kernel object with the same string name;
                // another instance of this application must be running now.
                // There is nothing to do in here; let's just return from Main to terminate
                // this second instance of the application.
            }
        }
    }
}

Interesting. Let me take a look at this. Yeah, I see now that the non-slim versions of these constructs have names and a createdNew parameter.

This is really great. Already reading the book is paying off.

10:30am. 823/896. So it turns out there is a reason why all the locking code allocates that extra object rather than using the class itself. I've been wondering about this!

10:40am.

It should be clear from this discussion that Monitor should not have been implemented as a static class; it should have been implemented like all the other locks: a class you instantiate and call instance methods on. In fact, Monitor has many other problems associated with it that are all because it is a static class.

Interesting.

The recommendation is not to use C#’s lock statement.

Oh, lol.

Holy shit, is the Monitor broken. They added an extra field for every object into the CLR, but it turned out to be a huge mistake.

11am.

The System.Threading.Barrier construct is designed to solve a very rare problem, so it is unlikely that you will have a use for it. Barrier is used to control a set of threads that are working together in parallel so that they can step through phases of the algorithm together.

Reminds me of Cuda's threadsync. In fact it is probably exactly like that.
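A minimal sketch of that phase-stepping idea, using Python's threading.Barrier as a stand-in for the .NET construct (the worker setup here is illustrative, not from the book):

    import threading

    N = 3
    # No worker proceeds to the next phase until all N have arrived,
    # much like System.Threading.Barrier or CUDA's __syncthreads().
    barrier = threading.Barrier(N)

    def worker(i: int) -> None:
        for phase in range(2):
            print(f"worker {i} finished phase {phase}")
            barrier.wait()   # block until all N workers reach this point

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()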

11:20am. 834/896. Wow, this stuff is tricky, but I do get it.

Yeah, it is fine if I just finish this book for today.

The annoying thing is that all this complexity ends up being introduced due to compiler optimizations. It is ridiculous.

11:55am. 847/896. The rest is the appendix.

With this I am done with it.

I feel that my understanding of concurrency primitives went up a notch thanks to all of this. This morning and yesterday were much better than I expected they would be. I thought that concurrency would be a grindfest, but I am getting useful understanding out of this.

12:05pm. Now, let me stop here for a while. When it is time to resume, I will definitely do some prototyping.

No more scouting. The next move will be to start the attack."


Wednesday 2020-06-10 10:23:25 by Kevin Huang

Finally "fixed" the DNA and put it in the Wrap-Up scene

I basically taught myself about delegates and event handlers in order to implement "clicking". Basically, if you put your hand near a strand, it'll turn orange. If you touch it, it turns green, changing base. Once you match all the pairs, the halves slowly merge together. The highlights and the base symbols will disappear.

I could not implement rotation, because rotation in VRTK seems to require basically a solid connection between what is being rotated and the axis or joint it rotates around. In my opinion, rotation seems a bit senseless.

Some things to comment on: The highlighting is a bit wonky in the wrap-up scene. Specifically, the Near Touch highlights don't work. I think it might be because the player hands don't actually have the InteractNearTouch script attached. If we don't want to attach that, that's fine.

I also didn't put in the shader graph yet because I want to wait until everyone is done putting stuff in the wrap-up scene before I start experimenting with that.


Wednesday 2020-06-10 23:12:38 by Mark Mendoza

RFC: Support nominally named parameters preceding parameter spec components

Summary: While writing up the spec for Concatenate, I realized that I have been enforcing a requirement that doesn't actually need to exist.

It is correct that we can't concatenate named parameters onto parameter specifications. However: this need only apply to the external interface of the function. As long as we require that the "named" parameters being prepended are being called positionally, the function itself can be written to have named parameters.

Concretely, both

   def f(x: X, *args: _TParams.args, **kwargs: _TParams.kwargs) -> R: ...

and

   def g(x: X, /, *args: _TParams.args, **kwargs: _TParams.kwargs) -> R: ...

are valid instances of Callable[Concatenate[X, TParams], R]

The question is: what is the better user experience:

    def q(x: int) -> None: ...

    def f(
        fn: Callable[_TParams, _TReturn], # Option A:  error here and complain that prepended arguments must be positional-only
        *args: _TParams.args,
        **kwargs: _TParams.kwargs,
    ) -> _TReturn: ...

    f(fn=q, x=1)  # Option B: potentially confusing error here, complaining about a missing first argument while it seems like you're correctly passing in a fn

This diff opts for B, as I feel like it's natural to add named parameters, especially when that named parameter is self. Requiring __ or / seems too painful.

I think we can avoid the confusion in option B with some best-effort special casing on the error to explicitly mention this policy.
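For readers following along, a minimal runnable sketch of the Concatenate pattern under discussion (assuming Python 3.10+ or typing_extensions; prepend_one and scale are hypothetical names, not from the diff):

    from typing import Callable, TypeVar
    from typing_extensions import Concatenate, ParamSpec

    _TParams = ParamSpec("_TParams")
    _TReturn = TypeVar("_TReturn")

    def prepend_one(fn: Callable[Concatenate[int, _TParams], _TReturn]) -> Callable[_TParams, _TReturn]:
        def inner(*args: _TParams.args, **kwargs: _TParams.kwargs) -> _TReturn:
            # The concatenated parameter is always supplied positionally,
            # which is why the RFC only requires positional *callability*.
            return fn(1, *args, **kwargs)
        return inner

    # Per this RFC, a nominally named first parameter is acceptable:
    def scale(x: int, y: str) -> str:
        return y * x

    g = prepend_one(scale)
    print(g(y="ab"))   # prints "ab": x=1 was prepended positionally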

Reviewed By: shannonzhu

Differential Revision: D21945652

fbshipit-source-id: 8f215c52f7a4fbbbe821c942e1a88a3e25bdf867


Wednesday 2020-06-10 23:42:10 by Mike Ruberry

Changes TensorIterator computation to not consider out kwarg, lets UnaryOps safe cast to out (#39655)

Summary: BC breaking note:

In PyTorch 1.5 passing the out= kwarg to some functions, like torch.add, could affect the computation. That is,

out = torch.add(a, b)

could produce a different tensor than

torch.add(a, b, out=out)

This is because previously the out argument participated in the type promotion rules. For greater consistency with NumPy, Python, and C++, in PyTorch 1.6 the out argument no longer participates in type promotion, and has no effect on the computation performed.
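A hedged illustration of the new behavior (dtypes chosen for the example; exact behavior depends on the PyTorch version):

    import torch

    a = torch.tensor([1, 2, 3])               # int64
    b = torch.tensor([4, 5, 6])               # int64
    out = torch.empty(3, dtype=torch.float32)

    # In 1.5, out's float32 dtype could participate in type promotion and
    # alter the computation type. In 1.6 the computation runs in the inputs'
    # promoted dtype (int64 here) and the result is then safely cast into out.
    torch.add(a, b, out=out)
    assert torch.equal(out, torch.add(a, b).to(out.dtype))   # tensor([5., 7., 9.])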

ORIGINAL PR NOTE

This PR effectively rewrites Tensor Iterator's "compute_types" function to both clarify its behavior and change how our type promotion works to never consider the out argument when determining the iterator's "common dtype," AKA its "computation type." That is,

a = op(b, c)

should always produce the same result as

op(b, c, out=a)

This is consistent with NumPy and programming languages like Python and C++.

The conceptual model for this change is that a TensorIterator may have a "common computation type" that all inputs are cast to and its computation performed in. This common computation type, if it exists, is determined by applying our type promotion rules to the inputs.

A common computation type is natural for some classes of functions, like many binary elementwise functions (e.g. add, sub, mul, div...). (NumPy describes these as "universal functions.") Many functions, however, like indexing operations, don't have a natural common computation type. In the future we'll likely want to support setting the TensorIterator's common computation type explicitly to enable "floating ufuncs" like the sin function that promote integer types to the default scalar type. Logic like that is beyond the type promotion system, which can only review inputs.
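For instance, the common dtype comes from the inputs alone (a sketch; torch.result_type exposes the promotion rules):

    import torch

    b = torch.tensor([1, 2], dtype=torch.int32)
    c = torch.tensor([0.5, 0.5], dtype=torch.float64)

    # The iterator's common computation type is the promotion of the inputs...
    print(torch.result_type(b, c))        # torch.float64

    # ...and out merely has to be a type the result can be safely cast to.
    a = torch.empty(2, dtype=torch.float64)
    torch.mul(b, c, out=a)                # computed in float64, stored in float64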

Implementing this change in a readable and maintainable manner was challenging because compute_types() has had many small modifications from many authors over ~2 year period, and the existing logic was in some places outdated and in other places unnecessarily complicated. The existing "strategies" approach also painted with a broad brush, and two of them no longer made conceptual sense after this change. As a result, the new version of this function has a small set of flags to control its behavior. This has the positive effect of disentangling checks like all operands having the same device and their having the same dtype.

Additional changes in this PR:

  • Unary operations now support out arguments with different dtypes. Like binary ops they check canCast(computation type, out dtype); see the sketch after this list.
  • The dtype checking for lerp was outdated and its error message included the wrong variable. It has been fixed.
  • The check for whether all tensors are on the same device has been separated from other checks. TensorIterators used by copy disable this check.
  • As a result of this change, the output dtype can be computed if only the input types are available.
  • The "fast path" for checking if a common dtype computation is necessary has been updated and simplified to also handle zero-dim tensors.
  • A couple helper functions for compute_types() have been inlined to improve readability.
  • The confusingly named and no longer used promote_gpu_output_dtypes_ has been removed. This variable was intended to support casting fp16 reductions on GPU, but it has become a nullop. That logic is now implemented here: https://github.com/pytorch/pytorch/blob/856215509d89c935cd1768ce4b496d4fc0e919a6/aten/src/ATen/native/ReduceOpsUtils.h#L207

Pull Request resolved: pytorch/pytorch#39655
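A sketch of the unary-op safe-cast rule mentioned in the list above (illustrative; torch.can_cast is the Python-facing counterpart of the canCast check):

    import torch

    x = torch.tensor([1.5, -2.5])                 # float32 input

    # Allowed: float32 casts safely to float64.
    wide = torch.empty(2, dtype=torch.float64)
    torch.neg(x, out=wide)                        # computed in float32, cast into float64

    # Rejected: float32 does not cast safely to int32.
    assert not torch.can_cast(torch.float32, torch.int32)
    narrow = torch.empty(2, dtype=torch.int32)
    try:
        torch.neg(x, out=narrow)
    except RuntimeError as err:
        print("safe-cast check failed as expected:", err)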

Differential Revision: D21970878

Pulled By: mruberry

fbshipit-source-id: 5e6354c78240877ab5d6b1f7cfb351bd89049012


Wednesday 2020-06-10 23:47:26 by Zynox

Commit v1.5

What's new in v1.5?

  • Added new commands: Notice board, Events, Status, Ping, Commands
  • Fixed and added to previous commands: Help and info
  • New trigger words: Smiley faces, Hello, Hi, Welcome, Thanks, Thank you, Tysm, Ooo, Stfu
  • Improved Hehe and Haha by a shit ton; it can guess it most of the time
  • Improved the call on FBot feature
  • Improved Fuck you; added F u
  • Added a plain fuck trigger word
  • Improved Oof
  • Simplified some of the code
  • Fixed Gayscale, again
  • Fixed some of the wording
  • Fixed a minor error (495 lines -> 637 lines)

< 2020-06-10 >