feat: In cluster dialer to proxy TCP connections to unexposed services #688
Conversation
Added code that will proxy TCP connections via an in-cluster pod. This is useful for accessing k8s services that are not exposed. Signed-off-by: Matej Vasek <mvasek@redhat.com>
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: matejvasek The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Codecov Report
@@ Coverage Diff @@
## main #688 +/- ##
==========================================
+ Coverage 38.03% 39.51% +1.47%
==========================================
Files 42 42
Lines 3917 3950 +33
==========================================
+ Hits 1490 1561 +71
+ Misses 2225 2172 -53
- Partials 202 217 +15
Continue to review full report at Codecov.
Signed-off-by: Matej Vasek <mvasek@redhat.com>
defer cancel()
delOpts := metaV1.DeleteOptions{}

return c.coreV1.Pods(c.namespace).Delete(ctx, c.podName, delOpts)
@rhuss @markusthoemmes is there a way to make the pod be automatically deleted upon completion?
You can do this if you are using a higher-level abstraction like Deployment or ReplicaSet to manage the lifecycle of your pod; otherwise, you have to do it on your own.
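As an aside, one built-in way to get automatic cleanup is to model the workload as a Job rather than a bare pod: `ttlSecondsAfterFinished` (stable since Kubernetes 1.23) makes the control plane delete the Job, and with it its pods, shortly after the Job completes. A minimal sketch, with all names and the image purely illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo              # illustrative name
spec:
  ttlSecondsAfterFinished: 30     # delete the Job and its pods 30s after completion
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox          # illustrative image
          command: ["sh", "-c", "echo done"]
```

Note the TTL only fires once the Job finishes, so this does not help a pod that runs `sleep infinity` forever.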
btw, what do you mean by 'upon completion'? When all containers in the pod stop?
Yes, when all processes have exited.
If the app for some reason crashes, Close() may not be called, so there would be dangling completed pods.
@rhuss @markusthoemmes When I
Maybe with a second exec sh -c "cat /tmp/pid.txt | xargs kill" (and writing the PID to a pid file in the first exec)?
At least for the
Yes, closing stdin should end
@rhuss @markusthoemmes Something like:

func init() {
	if dt, ok := http.DefaultTransport.(*http.Transport); ok {
		dc := dt.DialContext
		newDC := &mixedDialer{defaultDialContext: dc}
		dt.DialContext = newDC.DialContext
	}
}

type mixedDialer struct {
	o                  sync.Once
	inClusterDialer    *contextDialer
	inClusterDialerErr error
	defaultDialContext func(ctx context.Context, network, addr string) (net.Conn, error)
}

func (m *mixedDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, err
	}
	if strings.HasSuffix(host, ".svc") || strings.HasSuffix(host, ".cluster.local") {
		// Lazily create the in-cluster dialer exactly once.
		m.o.Do(func() {
			m.inClusterDialer, m.inClusterDialerErr = NewInClusterDialer(context.Background())
		})
		if m.inClusterDialerErr == nil {
			return m.inClusterDialer.DialContext(ctx, network, addr)
		}
		// Fall back to the default dialer if the in-cluster dialer could not be created.
	}
	return m.defaultDialContext(ctx, network, addr)
}

func (m *mixedDialer) Close() error {
	if m.inClusterDialer != nil {
		return m.inClusterDialer.Close()
	}
	return nil
}
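As a cluster-free illustration of why hooking DialContext works: the sketch below installs a counting dialer on a plain http.Transport and shows that every connection the client opens goes through it. `demo` and `dialCount` are names invented for this example; the in-cluster routing itself is elided.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
)

// demo installs a custom DialContext on an http.Transport, performs one
// request against a local test server, and reports the response body plus
// how many times the transport asked our dialer for a connection.
func demo() (string, int64) {
	var dialCount int64

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "hello")
	}))
	defer srv.Close()

	base := &net.Dialer{}
	transport := &http.Transport{
		// In the PR this is where *.svc / *.cluster.local hosts would be
		// routed through the in-cluster dialer; here we only count calls
		// and delegate to the default dialer.
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			atomic.AddInt64(&dialCount, 1)
			return base.DialContext(ctx, network, addr)
		},
	}
	client := &http.Client{Transport: transport}

	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	return string(body), atomic.LoadInt64(&dialCount)
}

func main() {
	body, n := demo()
	fmt.Println(body, n) // one request, one dial
}
```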
I've discussed a similar thing over at
@knative-sandbox/kn-plugin-func-approvers is this good to go?
@matejvasek do you think the socat image would be better under the boson namespace at quay.io? I'm kind of concerned about it being under a personal account.
@lance the image already exists in docker.io but I am mirroring it to avoid the pull limits. Also we might need to productize the image.
Signed-off-by: Matej Vasek <mvasek@redhat.com>
@lance I updated the image.
/lgtm
Added code that will proxy TCP connections via an in-cluster pod. This is useful for accessing k8s services that are not exposed.

The connections are created using the standard Go dial function:

DialContext(ctx context.Context, network string, addr string) (net.Conn, error)

Mechanism: upon creation of the dialer we create a pod whose command is sleep infinity. Then each time DialContext is called we execute socat - TCP:addr in the pod and use its stdio as the source for our implementation of net.Conn.
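The last step above, adapting the exec'ed process's stdio into a net.Conn, can be sketched with the standard library alone. Everything here (`stdioConn`, `stdioAddr`, `echoThrough`) is an illustrative name rather than the PR's actual implementation, and an in-memory pipe stands in for the socat process:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"time"
)

// stdioConn adapts a reader/writer pair (e.g. the stdout/stdin pipes of an
// exec'ed `socat - TCP:addr` process) into a net.Conn.
type stdioConn struct {
	io.Reader // reads come from the process's stdout
	io.Writer // writes go to the process's stdin
	closeFn   func() error
}

var _ net.Conn = (*stdioConn)(nil) // compile-time interface check

func (c *stdioConn) Close() error {
	if c.closeFn != nil {
		return c.closeFn()
	}
	return nil
}

// Pipes have no real network addresses and this sketch does not support
// deadlines, so the remaining net.Conn methods are stubs.
func (c *stdioConn) LocalAddr() net.Addr              { return stdioAddr{} }
func (c *stdioConn) RemoteAddr() net.Addr             { return stdioAddr{} }
func (c *stdioConn) SetDeadline(time.Time) error      { return nil }
func (c *stdioConn) SetReadDeadline(time.Time) error  { return nil }
func (c *stdioConn) SetWriteDeadline(time.Time) error { return nil }

type stdioAddr struct{}

func (stdioAddr) Network() string { return "stdio" }
func (stdioAddr) String() string  { return "stdio" }

// echoThrough sends msg through a stdioConn whose far end is an in-memory
// pipe and returns everything that came out the other side.
func echoThrough(msg string) string {
	pr, pw := io.Pipe()
	conn := &stdioConn{Reader: pr, Writer: pw, closeFn: pw.Close}

	go func() {
		fmt.Fprint(conn, msg) // write via the net.Conn interface
		conn.Close()          // closing "stdin" ends the stream
	}()
	data, _ := io.ReadAll(pr)
	return string(data)
}

func main() {
	fmt.Println(echoThrough("ping"))
}
```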