< 2020-06-11 >

2,516,324 events, 1,267,069 push events, 2,059,051 commit messages, 152,180,465 characters

Thursday 2020-06-11 00:08:02 by memji

1 industry national idea left to complete

MY GOLDFISH DIED TODAY. I HATE MYSELF FOR LETTING HIM DIE.


Thursday 2020-06-11 00:08:02 by Matthew Ahrens

Remove unnecessary references to slavery

The horrible effects of human slavery continue to impact society. The casual use of the term "slave" in computer software is an unnecessary reference to a painful human experience.

This commit removes all possible references to the term "slave".

Implementation notes:

The zpool.d/slaves script is renamed to dm-deps, which uses the same terminology as dmsetup deps.

References to the /sys/class/block/$dev/slaves directory remain. This directory name is determined by the Linux kernel. Although dmsetup deps provides the same information, it unfortunately requires elevated privileges, whereas the /sys/... directory is world-readable.
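
As a hedged illustration of the privilege difference (not code from this change; the device name below is hypothetical), here is a small Node.js sketch that reads the same world-readable sysfs directory:

```js
// Illustrative only: list the underlying devices of a dm device by reading
// the world-readable sysfs directory; no elevated privileges are needed,
// unlike `dmsetup deps`.
const fs = require('fs');

function dmDeps(dev) {
  // The "slaves" directory name is fixed by the Linux kernel.
  return fs.readdirSync(`/sys/class/block/${dev}/slaves`);
}

console.log(dmDeps('dm-0')); // hypothetical device; prints e.g. [ 'sda2' ]
```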

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Matthew Ahrens <mahrens@delphix.com>
Closes #10435


Thursday 2020-06-11 00:21:20 by Andrew Grosser

Added important disambiguation to swarm mode

This really needs to be added. I had no idea people gave up on docker/swarm because of a misunderstanding, but it's common enough that we need to clarify it.

From Docker's public #swarm slack channel:

andrew grosser  4:45 PM
Hey @channel I am about to give a talk in San Francisco to a bunch of devops experts about swarm using my ingress and reverse proxy controller https://github.com/sfproductlabs/roo and one of the organizers said swarm was deprecated, is that so? It's so much easier than kubernetes, I can't imagine losing it.
sfproductlabs/roo: A zero config distributed edge-router & reverse-proxy (supporting multiple letsencrypt/https hosts). No dependencies. (Go, 40 stars)
4:46
Is there something we don't know?
james_wells  4:48 PM
As of the most recent official Docker release, no, Swarm is still officially part of Docker...  They merely added native support for Kubernetes
andrew grosser  4:49 PM
:pray: Phew, is there an EOL?
4:49
Thanks @james_wells
4:50
I think they going to get the grenade launchers out if I can't answer these questions
james_wells  4:51 PM
Now that is a good question and my guess is that no, there is no plan to remove it, at least before Docker 3.
andrew grosser  4:52 PM
Amazing thx, I have a system that is a startups dream and is personally saving me more than 10x using swarm, so praying it stays
bmitch:docker:  4:53 PM
Classic container deployed swarm is deprecated (I believe). Swarm mode that's integrated into the engine is still being developed by Mirantis with no EOL set.
4:53
So if someone says swarm is deprecated, make sure to ask "which swarm" they are referring to.
andrew grosser  4:54 PM
Ok thanks @bmitch
4:54
Think that's a brand thing we'll need to help change
james_wells  4:56 PM
@bmitch I am not sure I understand what you are sayin there.  Could you please explain the differences
bmitch:docker:  4:56 PM
See the disambiguation section: https://hub.docker.com/r/dockerswarm/swarm
james_wells  4:57 PM
Excellent.  Thank you sir
andrew grosser  5:02 PM
Thanks
bmitch:docker:  5:02 PM
See also this link where they are getting ready to archive the standalone swarm, aka classic swarm. https://github.com/docker/classicswarm/issues/2985#issuecomment-640486361
justincormack
Comment on #2985 Why have all issues been closed?
The vast majority of issues were from 5 years ago when it was being actively developed, and the recent ones were all mistakes for swarmkit, other than some issues I resolved. Many were issues in components or Moby or other software and may be resolved. It is GitHubs (reasonable) recommendation that you close issues and PRs before archiving a repository so that people know they are not being worked on, and I was also looking to see if anyone came forward to say that they were still working on things or, indeed, actively using Swarm Classic.
james_wells  5:08 PM
That is really unfortunate...  Kubernetes is simply too expensive IMNSHO, Swarm is nice and lightweight.
andrew grosser  5:08 PM
Both the different swarms point to the same point in the documentation in the disambiguation @bmitch
bmitch:docker:  5:09 PM
Swarm mode, aka swarmkit is alive and well.
andrew grosser  5:10 PM
Whoa I can see why they were confused
bmitch:docker:  5:10 PM
If you type docker swarm init you are not running classic swarm
andrew grosser  5:11 PM
Can someone inside docker add this to the swarm docs page? I think it's important
5:12
I think something talking about 2014 was EOLd but this is still current and alive would help.
bmitch:docker:  5:12 PM
Docker themselves isn't maintaining it, that team went to Mirantis, so someone over there would need to submit the PR
andrew grosser  5:12 PM
OK, could I?
bmitch:docker:  5:13 PM
Docs are in GitHub
andrew grosser  5:13 PM
Thanks

Thursday 2020-06-11 00:35:50 by Callum Hay

Added the chroma.js library to do all the heavy lifting for colour mixing / interpolation, format calculation, etc. Yeah, it's another library, yeah, I don't want to spend my time writing this shit myself. Started integrating chroma with a bunch of the older colour code, replacing my poorly functioning crap.
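
For context, a hedged sketch of the kind of work chroma.js takes over (the colour values are arbitrary examples, not taken from this project):

```js
// chroma.js handles colour mixing/interpolation and format conversion.
const chroma = require('chroma-js');

// Mixing / interpolation between two colours (here in Lab space).
const mixed = chroma.mix('#ff0000', '#0000ff', 0.5, 'lab');

// Format calculation / conversion.
console.log(mixed.hex());      // '#rrggbb' hex string
console.log(mixed.rgb());      // [r, g, b] array
console.log(mixed.css('hsl')); // CSS hsl() string

// An interpolated colour scale, e.g. for gradients.
const scale = chroma.scale(['#ffffcc', '#800026']).mode('lch');
console.log(scale(0.25).hex());
```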


Thursday 2020-06-11 01:53:24 by Doug Anderson

pinctrl: Don't just pretend to protect pinctrl_maps, do it for real

commit c5272a28566b00cce79127ad382406e0a8650690 upstream.

Way back, when the world was a simpler place and there was no war, no evil, and no kernel bugs, there was just a single pinctrl lock. That was how the world was when (57291ce pinctrl: core device tree mapping table parsing support) was written. In that case, there were instances where the pinctrl mutex was already held when pinctrl_register_map() was called, hence a "locked" parameter was passed to the function to indicate that the mutex was already locked (so we shouldn't lock it again).

A few years ago in (42fed7b pinctrl: move subsystem mutex to pinctrl_dev struct), we switched to a separate pinctrl_maps_mutex. ...but (oops) we forgot to re-think about the whole "locked" parameter for pinctrl_register_map(). Basically the "locked" parameter appears to still refer to whether the bigger pinctrl_dev mutex is locked, but we're using it to skip locks of our (now separate) pinctrl_maps_mutex.

That's kind of a bad thing(TM). Probably nobody noticed because most of the calls to pinctrl_register_map happen at boot time and we've got synchronous device probing. ...and even cases where we're asynchronous don't end up actually hitting the race too often. ...but after banging my head against the wall for a bug that reproduced 1 out of 1000 reboots and lots of looking through kgdb, I finally noticed this.

Anyway, we can now safely remove the "locked" parameter and go back to a war-free, evil-free, and kernel-bug-free world.

Fixes: 42fed7ba44e4 ("pinctrl: move subsystem mutex to pinctrl_dev struct")
Signed-off-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


Thursday 2020-06-11 02:31:33 by jadc

changed generator to be nicer for me, jad. fuck you


Thursday 2020-06-11 03:41:48 by Ał¡

Homework Assignment #6: Responsiveness

<title>Gourmet Chicken Pizza</title>
<main>

	<header id="intro">
	   <h1>Gourmet Chicken Pizza</h1>

	   <section>
		  <p>Here is a chicken pizza recipe that you may love. We do. We used to purchase this already prepared for the oven, so now I have come up with my own recipe. A perfect piece of pizza!</p>
		  <img id="test-image" width="500" height="500" src="https://imagesvc.meredithcorp.io/v3/mm/image?url=https%3A%2F%2Fimages.media-allrecipes.com%2Fuserphotos%2F884222.jpg">
		  <h3>You, too, can enjoy this deliciousness.</h3>
	   </section>
	</header> 

	<section id="blog-text">
	 	<p>
	 	There was a stretch of my life that pizza was not a treat for me. I finally figured out that it was the sauce – some store-bought pizzas had so much sauce on them, and I found it too strong. My view on pizza changed after realizing that I could put different sauces on my homemade pizza to change things up.
	 </p>
	 <p>
	  Using prepared salad dressing, either Ranch or Caesar dressing, make pizza night even easier. It allows you to make a variety of pizzas so that there is something for everyone, even those that don’t like tomato sauce.
	</p>
	<center><iframe width="727" height="409" src="https://www.youtube.com/embed/RXOAeyJCq28" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></center>

	<p>This Gourmet Chicken Pizza, topped with a creamy salad dressing and chicken, is a favourite of everyone in our family.</p>

  </section>

<article>	 
	 <section id="ingredients" class="card">
	 	<h3>Ingredients</h3>
	 	<ul>
	 	  <li>2 skinless, boneless chicken breast halves</li>
	 	  <li>1 (10 ounce) can refrigerated pizza crust</li>
	 	  <li>½ cup Ranch-style salad dressing</li>
	 	  <li>1 cup shredded mozzarella cheese</li>
	 	  <li>1 cup shredded Cheddar cheese</li>
	 	  <li>1 cup chopped tomatoes</li>
	 	  <li>¼ cup chopped green onions</li>
	 	</ul>
	 </section>

	 <section id="directions" class="card">
	 	<h3>Directions</h3>
	 	<ol>
	 	  <li>
	 	  	<p>Preheat oven to 425 degrees F (220 degrees C). Lightly grease a pizza pan or medium baking sheet.</p>
	 	  </li>
	 	  <li>
	 	  	<p>Place chicken in a large skillet over medium-high heat. Cook until no longer pink, and juices run clear. Cool, then either shred or chop into small pieces.</p>
	 	  </li>
	 	  <li>
	 	  	<p>Unroll dough, and press into the prepared pizza pan or baking sheet. Bake crust for 7 minutes in the preheated oven, or until it begins to turn golden brown. Remove from oven.</p>
	 	  </li>
	 	  <li>
	 	  	<p>Spread ranch dressing over partially baked crust. Sprinkle on mozzarella cheese. Place tomatoes, green onion, and chicken on top of mozzarella cheese, then top with Cheddar cheese. Return to the oven for 20 to 25 minutes, until cheese is melted and bubbly.</p>
	 	  </li>
	 	</ol>
	 </section>

	 <section>
	 	<img width="200" height="200" src="https://imagesvc.meredithcorp.io/v3/mm/image?url=https%3A%2F%2Fimages.media-allrecipes.com%2Fuserphotos%2F1647961.jpg&w=596&h=792&c=sc&poi=face&q=85">
	 	<img width="200" height="200" src="https://imagesvc.meredithcorp.io/v3/mm/image?url=https%3A%2F%2Fimages.media-allrecipes.com%2Fuserphotos%2F230287.jpg&w=596&h=399&c=sc&poi=face&q=85">
	 	<img width="200" height="200" src="https://imagesvc.meredithcorp.io/v3/mm/image?url=https%3A%2F%2Fimages.media-allrecipes.com%2Fuserphotos%2F750984.jpg&w=596&h=399&c=sc&poi=face&q=85">
	 	<img width="200" height="200" src="https://imagesvc.meredithcorp.io/v3/mm/image?url=https%3A%2F%2Fimages.media-allrecipes.com%2Fuserphotos%2F855364.jpg&w=596&h=596&c=sc&poi=face&q=85">
	 </section>

</article> 

    <section id="lower-image">
    	<img width="300" height="300" src="https://www.faithfullyglutenfree.com/wp-content/uploads/2014/09/Skillet-Mexican-Chicken.jpg">
    </section>

</main>

<footer>
	<p>You’re running out of time! It’s nearly dinnertime, but you don’t know what to serve the family. My suggestion? This Creamy Mexican Chicken Skillet Dinner.</p>
</footer>

Thursday 2020-06-11 05:42:54 by Derek Seymour

Create In a different time, the Jew Cuban was somebody else. Just a kid, he was smart with numbers but his family was dirt poor. So the school put him in the lower stream. His poppy was a bus-driver from the old country who was proud to put food on the table. However, when he found out the school was dicking his protege he caused a stink. He visited the head-master of the school and told him as much. "My son knows numbers," he said, "and you stink!" Naturally, this didn't go down well with the powers that be, and instead of elevating the protege into a higher class bracket, they decided to down-grade him, and sent a fancy letter explaining their decision. The Jew Cuban didn't mind. Put him in with all the other momo's? Sure, why not, they were his friends. Why the fuck would he want to be in the top tier anyway, with all those other empty vessels, and their insane parents demanding overtime on the job? Nah, fuck that.


Thursday 2020-06-11 06:56:18 by NewsTools

Created Text For URL [naijanewsagency.com/my-fame-has-brought-me-so-much-hate-my-life-has-been-threatened-bbns-joe-abdallah/]


Thursday 2020-06-11 07:21:04 by oranges

Merge pull request #50497 from stylemistake/tgui-3.0

About The Pull Request

This is a massive rework that is coming down the pipe.

Removal of routes.js

This PR removes the routes.js file in favor of filename-based routing. DM code will now reference components directly.

For example, previously, DM code would use "ntos_main" as a key. Now you use "NtosMain" as the key, and it must match both the exported name and the file name of your component.
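
A hedged sketch of the convention (the file path and component body are illustrative placeholders, not the real interface):

```jsx
// File: packages/tgui/interfaces/NtosMain.js (path is illustrative)
// The exported name and the file name both match the "NtosMain" key
// referenced from the DM side.
export const NtosMain = (props, context) => {
  return null; // placeholder body
};
```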

Flexible Layout System

As a result of the above, interfaces are now top-level components. To accommodate this change, we have created a new abstraction for the layout: the Window component.

You will now see that interfaces, instead of returning window contents directly, return a tree that starts with the Window component:

    export const Apc = (props, context) => {
      return (
        <Window>
          <Window.Content scrollable>
            ...
          </Window.Content>
        </Window>
      );
    };

Metadata, which was previously added to routes.js (things like theme, wrapper, scrollable) can now be declared on the Window.

This also eliminates the need for a concept called wrapper, because now you can directly control where you render <Window.Content />, which allows adding things like toolbars, status bars, and modal windows.
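
A hedged sketch of that idea (the component name, theme value, import paths, and toolbar are placeholders, not from this PR): metadata is declared on the Window itself, and extra chrome is rendered outside of <Window.Content />:

```jsx
import { Box } from '../components';
import { Window } from '../layouts';

export const NtosExample = (props, context) => {
  return (
    <Window theme="ntos">
      {/* Extra chrome rendered outside of Window.Content, e.g. a toolbar. */}
      <Box className="Toolbar">Back / Forward / Refresh</Box>
      <Window.Content scrollable>
        Page body goes here.
      </Window.Content>
    </Window>
  );
};
```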

We also have a dedicated window layout for NtOS interfaces, and one for completely custom, non-window-based layouts (side-panel tguis, or maybe even goonchat or stat panel replacement, wink wink, nudge nudge, WYCI).

Modals

Now that we have a new layout system, modals are easier to implement than ever! This is because we now have a clear slot for them, which covers the whole area above the content with a Dimmer.

This avoids issues like Dimmer being scrollable together with the content, or covering content only partially, and things like that.

    export const Apc = (props, context) => {
      return (
        <Window>
          {/* Dimmer/Modal slot starts here */}
          {showModal && (
            ...
          )}
          {/* Dimmer/Modal slot ends here */}
          <Window.Content scrollable>
            ...
          </Window.Content>
        </Window>
      );
    };

React Context

You have probably noticed that we have a second argument to components: context.

This is a "magical" state container that is available on all components without you doing anything extra to support it. It carries the Redux store and all tgui backend-related data, meaning that things like useBackend(context) will now use context instead of props.

This also means that you no longer have to pass around the state via the props:

With context available on all components, this is all you need to do:
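
A hedged sketch of what that looks like (the component name and data fields are made up for illustration; import paths follow the usual tgui layout):

```jsx
import { useBackend } from '../backend';
import { Box, Button } from '../components';

export const PowerMonitorRow = (props, context) => {
  // No props threading: backend state arrives through context.
  const { act, data } = useBackend(context);
  return (
    <Box>
      {data.name}: {data.load}
      <Button content="Toggle" onClick={() => act('toggle')} />
    </Box>
  );
};
```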

Shared TGUI states

We introduce a new abstraction that eliminates all previous use cases for class-based React components: local component state.

You can now achieve the same thing with the help of the useLocalState() function:

    const AirAlarmControl = (props, context) => {
      const [screen, setScreen] = useLocalState(context, 'screen');
      // Use screen to access the current screen.
      // Use the setScreen(nextValue) function to set screen to a new value.
    };

This also removes the redundant tgui:view action and the config.ui_screen variable, because this thingie can achieve the same thing in a more generic way.

But wait, there's more!

You can use the useSharedState() function to not only create a piece of state, but also sync it across all clients, which is a fantastic way to let observers see how the user interacts with the UI.

    const AirAlarmControl = (props, context) => {
      const [screen, setScreen] = useSharedState(context, 'screen');
      // Now screen will change for everyone who observes the UI.
    };

The useSharedState() value is serialized to JSON, which means you can use anything as state, not only strings and primitive values.

Performance Improvements

We have sped up the initial render by about a full frame.

Miscellaneous

- Fixed the operating computer getting stuck on the last step of the surgery.
- All UIs refactored to use the new Tabs API.
- Formatters: formatPower outputs watts with SI units (kilo, mega, etc); formatMoney formats cash with thousand separators and shit (code for this is stolen from real business applications).
- Number helpers: a round(number, precision) helper with correct 0.5 rounding and other float nonsense fixed; toFixed(number, precision) won't throw an exception on negative values.
- Moving stuff around in webpack: DOM polyfills get their own directory and are bundled together with the rest of the code (60KB -> 20KB, indirectly adds speed to the initial render); stylesheets are now imported in index.js (well, everything is imported in index.js now).
- Achievements UI cleaned up: smaller, neater UI.
- Black Market Uplink cleaned up.
- Generalized Uplinks: the Malf Module Picker now uses this generic uplink UI, with its custom theme still applied. Saved a few kilobytes.
- Uplinks limit search results to 25, which helps reduce lag while typing into the search box.
- Added padding props to Box, aka all this crap: p, px, py, pt, pr, pl, pb.
- Reduced stats while building the bundle, so now you can actually see the meaningful green text in the console instead of all the child module spam.

Flattened Crafting Categories

New Kiosk UI

Modal Tweaks

You can track progress of other tgui ideas here: tgui: Roadmap

Information for downstreams

Your tgui modifications will not be easily mergeable, because we have effectively shifted the indentation of all interfaces by two and added a new wrapping component (the Window layout described above).

This will be a lot of manual work, but it's not that terrible. If you can isolate your local contributions to tgui, you can just copy-paste them into the new tgui where needed, and it should just work.

If you're not yet using tgui 2.0 (or tgui-next) in your project: great news, this is the final big refactor, and everything will be quieter after this PR. Things are looking really solid now. This will serve as a great base for your future UI work.

Acknowledgements

Big thanks to @actioninja for converting half of the interfaces, that's a lot of work!


Thursday 2020-06-11 07:56:26 by Marko Grdinić

"8:15am. During the night I've been laser focused on what I want to do. But before I start today let me chill for a while.

8:35am. Ok, I am ready...to start reading those Her Majesty Swarm volumes I found yesterday night. Let me do that for a while - and then I will start programming.

9am. Uf, I haven't even started reading it. Just 10m and then I'll start programming.

9:30am. Ok, let me start. I indulged myself a bit. Let me see if I can do the prototype before the morning is over.

Every person (of intelligence) in this world contributes something to it at least once. And my contribution to it will definitely be staged functional programming in the form of Spiral. My design sense is the thing I need to share.

I'll do this only once. As a human I do not feel like doing more than this. I am supposed to be selfish and lazy, but here I am spending my days working for free.

...I want...my power.

I want it more than anything else. I want the life to move to the fun parts. I want the things that I do to have real effect upon reality. A person like me is not meant to be at the bottom forever.

And today, I will make a small step towards proving that.

9:40am. I can't tell at all how months down the road the interviewing will go. Ideally the people would get it, but that idiot schizo plus trying to explain the self improvement loop to one of my acquaintances killed me. Because of that I do not have much confidence in my ability to highlight the importance of my work. The guys at the ML sub seem to be perfectly content with PyTorch and Tensorflow.

I really did think that there would be at least a few other people enamored with the approach I am taking.

...It is a pity.

9:45am. Well, v0.2 needs to be given a shot. It is true that programming the way I did in 2018 in Spiral became unbearable even for me. If that is the case for me, then things have to be much worse for other people.

9:50am. Having confidence or not does not matter. I might not be good at talking to morons, but I have at least enough brains to recognize this fact. The only thing that matters is recognizing what I have to do and then doing it. I need to keep going and try. I need to risk failure.

One vision, one purpose.

You can recognize mediocrities by their parroting of the wisdom that technical skills do not matter. It would absolutely shock the normies if they were to ever find out that social skills were just a skill and that you can allocate autism points into it just as anything else."


Thursday 2020-06-11 11:22:18 by Jeremy Cline

kdump: add support for crashkernel=auto

Rebased for v5.3-rc1 because the documentation has moved.

Message-id: <20180604013831.574215750@redhat.com>
Patchwork-id: 8166
O-Subject: [kernel team] [PATCH RHEL8.0 V2 2/2] kdump: add support for crashkernel=auto
Bugzilla: 1507353
RH-Acked-by: Don Zickus <dzickus@redhat.com>
RH-Acked-by: Baoquan He <bhe@redhat.com>
RH-Acked-by: Pingfan Liu <piliu@redhat.com>

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1507353
Build: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=16534135
Tested: ppc64le, x86_64 with several memory sizes.
        kdump qe tested 160M on various x86 machines in lab.

We continue to provide crashkernel=auto like we did in RHEL6
and RHEL7; this will simplify kdump deployment for the common
use cases where kdump just works with the auto-reserved values.
But this is still a best-effort estimation; we cannot know the
exact memory requirement because it depends on a lot of
different factors.

The implementation of crashkernel=auto is simplified as a wrapper
that uses the kernel cmdline below:
x86_64: crashkernel=1G-64G:160M,64G-1T:256M,1T-:512M
s390x:  crashkernel=4G-64G:160M,64G-1T:256M,1T-:512M
arm64:  crashkernel=2G-:512M
ppc64:  crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G
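
As an illustration of how the range syntax resolves (a simplified sketch, not kernel code; it ignores @offset and other variants the real parser accepts):

```js
// Illustrative only: resolve a crashkernel range string such as
// "1G-64G:160M,64G-1T:256M,1T-:512M" for a given system memory size.
const UNITS = { K: 2 ** 10, M: 2 ** 20, G: 2 ** 30, T: 2 ** 40 };

const toBytes = (s) =>
  s === '' ? Infinity : Number(s.slice(0, -1)) * UNITS[s.slice(-1)];

function crashkernelSize(ranges, systemMemBytes) {
  for (const entry of ranges.split(',')) {
    const [range, size] = entry.split(':');
    const [start, end] = range.split('-');
    if (systemMemBytes >= toBytes(start) && systemMemBytes < toBytes(end || '')) {
      return size; // reservation for the matching range
    }
  }
  return null; // below the smallest range: nothing is reserved
}

// e.g. a 128 GiB x86_64 machine falls in 64G-1T, so 256M would be reserved.
console.log(crashkernelSize('1G-64G:160M,64G-1T:256M,1T-:512M', 128 * UNITS.G));
```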

The difference between this approach and the old implementation in
RHEL6/7 is that we no longer scale the crash-reserved memory size
according to the system memory size.

The latest effort to move this upstream is the thread below:
https://lkml.org/lkml/2018/5/20/262
But unfortunately it is still unlikely to be accepted, thus we
will still use a RHEL-only patch in RHEL8.

The old patch description explaining the historical reasons is copied below:
'''
    Non-upstream explanations:
    Besides the "crashkernel=X@Y" format, upstream also has advanced
    "crashkernel=range1:size1[,range2:size2,...][@offset]", and
    "crashkernel=X,high{low}" formats, but they need more careful
    manual configuration, and have different values for different
    architectures.

    Most of the distributions use the standard "crashkernel=X@Y"
    upstream format, and use crashkernel range format for advanced
    scenarios, heavily relying on the user's involvement.

    While "crashkernel=auto" is redhat's special feature, it exists
    and has been used as the default boot cmdline since 2008 rhel6.
    It does not require users to figure out how much crash memory
    to reserve for their systems, and it has been proved to work
    pretty well for common scenarios.

    "crashkernel=auto" was tested/based on rhel-related products, as
    we have stable kernel configurations which means more or less
    stable memory consumption. In 2014 we tried to post them again to
    upstream but NACKed by people because they think it's not general
    and unnecessary, users can specify their own values or do that by
    scripts. However our customers insist on having it added to rhel.

    Also see one previous discussion related to this backport to Pegas:
    On 10/17/2016 at 10:15 PM, Don Zickus wrote:
    > On Fri, Oct 14, 2016 at 10:57:41AM +0800, Dave Young wrote:
    >> Don, agree with you we should evaluate them instead of just inherit
    >> them blindly. Below is what I think about kdump auto memory:
    >> There are two issues for crashkernel=auto in upstream:
    >> 1) It will be seen as a policy which should not go to kernel
    >> 2) It is hard to get a good number for the crash reserved size,
    >> considering various different kernel config options one can setups.
    >> In RHEL we are easier because our supported Kconfig is limited.
    >> I digged the upstream mail archive, but I'm not sure I got all the
    >> information, at least Michael Ellerman was objecting the series for
    >> 1).
    > Yes, I know.  Vivek and I have argued about this for years.  :-)
    >
    > I had hoped all the changes internally to the makedumpfile would allow
    > the memory configuration to stabilize at a number like 192M or 128M and
    > only in the rare cases extend beyond that.
    >
    > So I always treated that as a temporary hack until things were better.
    > With the hope of every new RHEL release we get smarter and better. :-)
    > Ideally it would be great if we could get the number down to 64M for most
    > cases and just turn it on in Fedora.  Maybe someday.... ;-)
    >
    > We can have this conversation when the patch gets reposted/refreshed
    > for upstream on rhkl?
    >
    > Cheers,
    > Don

    We had proposed to drop the historic crashkernel=auto code and move
    to use crashkernel=range:size format and pass them in anaconda.

    The initial reason is crashkernel=range:size works just fine because
    we do not need complex algorithm to scale crashkernel reserved size
    any more.  The old linear scaling is mainly for old makedumpfile
    requirements, now it is not necessary.

    But with the new approach, backward compatibility is potentially at risk.
    For e.g. let's consider the following cases:
    1) When we upgrade from an older distribution like rhel-alt-7.4(which
    uses crashkernel=auto) to rhel-alt-7.5 (which uses the crashkernel=xY
    format)
    In this case we can use anaconda scripts for checking
    'crashkernel=auto' in kernel spec and update to the new
    'crashkernel=range:size' format.
    2) When we upgrade from rhel-alt-7.5(which uses crashkernel=xY format)
    to rhel-alt-7.6(which uses crashkernel=xY format), but the x and/or Y
    values are changed in rhel-alt-7.6.
    For example from crashkernel=2G-:160M to crashkernel=2G-:192M, then we have
    no way to determine if the X and/or Y values were distribution
    provided or user specified ones.
    Since it is recommended to give precedence to user-specified values,
    so we cannot do an upgrade in such a case."

    Thus we turn back to resolving it in the kernel, and add a simpler
    version which just hacks the range:size style into the code, making
    the rhel-only code easy to maintain.
'''

Signed-off-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>

Upstream Status: RHEL only
Signed-off-by: Jeremy Cline <jcline@redhat.com>


Thursday 2020-06-11 11:32:25 by Linus Torvalds

Revert "x86/apic: Include the LDR when clearing out APIC registers"

[ Upstream commit 950b07c14e8c59444e2359f15fd70ed5112e11a0 ]

This reverts commit 558682b5291937a70748d36fd9ba757fb25b99ae.

Chris Wilson reports that it breaks his CPU hotplug test scripts. In particular, it breaks offlining and then re-onlining the boot CPU, which we treat specially (and the BIOS does too).

The symptoms are that we can offline the CPU, but it then does not come back online again:

smpboot: CPU 0 is now offline
smpboot: Booting Node 0 Processor 0 APIC 0x0
smpboot: do_boot_cpu failed(-1) to wakeup CPU#0

Thomas says he knows why it's broken (my personal suspicion: our magic handling of the "cpu0_logical_apicid" thing), but for 5.3 the right fix is to just revert it, since we've never touched the LDR bits before, and it's not worth the risk to do anything else at this stage.

[ Hotplugging of the boot CPU is special anyway, and should be off by default. See the "BOOTPARAM_HOTPLUG_CPU0" config option and the cpu0_hotplug kernel parameter.

In general you should not do it, and it has various known limitations (hibernate and suspend require the boot CPU, for example).

But it should work, even if the boot CPU is special and needs careful treatment - Linus ]

Link: https://lore.kernel.org/lkml/156785100521.13300.14461504732265570003@skylake-alporthouse-com/
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Bandan Das <bsd@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Change-Id: I405efcbe5e93a4ac13af05a91fc38f29fecb958b


Thursday 2020-06-11 11:42:33 by Antipurity

I feel the rot of non-fundamentality in 'generation'.

I'm not motivated to work on what's left to do there, at all (though I probably should at least test & fix the code I've made — and probably spend a day training all these NNs).

How could any of this compare to MuZero?

MuZero implementations play mostly on 2D boards and precise-position-is-everything game states; we rely on graphs.

"Maximizing a prediction" (or sampling it) to pick the best option during generation, could be (like) the prediction network (from state to policy and value)… Except, we can't just maximize an option-neutral prediction; we have to have a function from an option (which can be a real, a graph, or object-id — all handled in its own way) to a known-channel prediction of how well it would do (where, once the predictable is known, the picked option is adjusted to be more correct).

Self-play and training are the same here; theoretically, could be separated into (turning adjust-less play into a dynamic dataset) and (training on the dataset).

No tree search, and no learned dynamics. What would such things be, for execution state? It's not like we're playing a game here, where the concept "state transitions" makes sense. Maybe for predicting human input and responding to it, this could make some sense (user clicked on X, so we present Y, then the user types "no, I meant Z"; seems unfeasibly difficult for NNs, though). But we're just executing a function. Or are we? Our GNN framework can do graph rewrites guided by NNs: transitions from graph to graph. Maybe, if each executable graph had a hidden representation (GNN from graph to 'state') (repr net), and from state and rewrite's ID we could predict new state and its loss-after-training (dynamics net), and for each state we could predict which rewrite IDs are worth exploring and the final best loss we could achieve (prediction net)… This sounds exactly like MuZero. In fact, since the actual dynamics are so expensive, approximating them will probably be much better than MuZero. However, the actual dynamics of AutoML are very noisy, so it might get tripped up by that. AND BY THE LACK OF COMPUTE IN MY POTATO PC

But how would inner learned functions fit into all this? And, I don't represent syntactic macros in my SSA, forcing hacks like match_id is for a hypothetical if; should I?

This all sounds important and interesting; maybe I should formalize it a bit more and ask someone else? (Has this whole venture been just another tool to understand this better, I wonder?)


Thursday 2020-06-11 13:50:05 by PlsJumpForMe

Omg i fucking finally fixed the T94 MAX

holy crap it works


Thursday 2020-06-11 15:46:33 by quietly-turning

more sensible zoom/positioning of StepStats

re: github.com/quietly-turning/Simply-Love-SM5/pull/167#issuecomment-549084002

"Yeah, my code there was ugly for sure, so don't worry about it. I rushed that out in 2018 as part of an effort to completely abandon the theme after the v4.7 release. I should probably go back and clean it up someday..."


Thursday 2020-06-11 16:58:42 by Marko Rodriguez

What I was struggling with for 3 hours last night, I figured out in 5 minutes this morning. I was really close last night. When a user-specified range is provided via <=, we have to name the objs coming out of the computation as such. We no longer need a string in [define]. The range is the name of the definition. Also, because there is no name, it's point-free style. Also, we can now have as many x<=y patterns [defined] for all the different ways of mapping an object of type y to an object of type x.


Thursday 2020-06-11 18:04:37 by Sameer Khan

cybercrimes committed against me and Ontarians

In mid-Nov 2019 I provided my report to the WRPS to register a cybercrime. They in turn started harassing me and did everything to illegally paint me as a person with paranoia to cover up their violations instead of actually addressing the cybercrimes that impacted all Ontarians. These cybercrimes with digital viruses had the signature design of a long gestation period and had the typical m.o. of steps carried out by the CIA + NSA against Afghanistan, Syria, Iraq and Iran, which typically is followed by a biological virus attack. Groups and agencies were directing the very same mode of attacks against the US, Canada, UK and EU, taken straight out of the war doctrines of western military forces.

The knowledge about the covid19 pandemic that was made public to Canadians in Feb-March 2020 was already available to analysts like me (without a shadow of doubt) in Sept-Oct 2019. By Dec 2019 many of my sources and contacts looking into issues stemming from unethical military activities by US coalition forces in the South China Sea had suffered cybercrimes against them, with massive data breaches against our computer and communication networks. These activities threatened our lives and those of our families, amidst which the local police authorities not only dismissed our safety concerns but actively erased evidence of their misdeeds, violations, corruption and malpractice!


Thursday 2020-06-11 18:58:04 by Choxflan

did anyone ask for Aragonese flavor ? no ? then fuck you

- 3 Aragonese decisions to reclaim former crown lands
- 1 event for Aragonese Greece restoration, based on the crusader state of the Duchy of Athens and Neopatria
- removed Catalonian cores from Valencia due to them joining the separatist movement upon the final Aragonese dissolution

did all this due to knowing ppl always click on aragon in mods where they can tbh


Thursday 2020-06-11 21:25:32 by Dmitry Kazakov

Fix "Stroke Selection" when any selection tool is activated

This patch is a bit hackish. The actual bug is caused by the per-tool opacity patch (6daf2cb7), which is still planned to be refactored.

The problem is that we store opacity in the preset(!), not in the resource server itself. Therefore, if any paint tool is first activated when there is no preset available, then it will remember the default opacity (which was 0.0 before this patch) and all non-painting tools will use this opacity as default.

In the future we should refactor per-tool opacity so that it does not write to the presets (which makes them dirty when switching tools, which is a bug). The non-painting tools should have some flag that tells the resource provider that the opacity should be stored not in the preset, but separately.

BUG:421752


Thursday 2020-06-11 23:44:36 by Saqib Ali

POV: COVID-19 Shows Us We Need Rapid Response Data Science Teams | BU Today | Boston University. Data Scientist – Business Intelligence – 1000ml – Train & Verify. Data Scientist – Business Intelligence – 1000ml – Train & Verify. Data Scientist – Business Intelligence – 1000ml – Train & Verify. Data scientists inventing new tools to rapidly analyze the spread, evolution of novel coronavirus | Scripps Research. Can Rapid Response Data Science Teams Help Prevent Future Pandemics? | Rafik Hariri Institute for Computing and Computational Science & Engineering. Sr Data Scientist with python and SQL experience working within a big dataenvironment – 1000ml – Train & Verify. Job Application for Data Scientist (Journeyman) at Novetta. Job Application for Data Scientist at Novetta. Job Application for Data Scientist at Novetta.


< 2020-06-11 >